labs/C2_machine_learning/03_analisis_supervisado_clasificacion/laboratorio_08.ipynb
###Markdown MAT281 - Laboratorio N°02 Objetivos del laboratorio* Reforzar conceptos básicos de clasificación. Contenidos* [Problema 01](p1) I.- Problema 01El conjunto de datos se denomina `creditcard.csv` y consta de varias columnas con información acerca del fraude de tarjetas de crédito, en donde la columna **Class** corresponde a: 0 si no es un fraude y 1 si es un fraude.En este ejercicio se trabajará el problemas de clases desbalancedas. Veamos las primeras cinco filas dle conjunto de datos: ###Code import os import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn import datasets from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix,accuracy_score,recall_score,precision_score,f1_score from sklearn.dummy import DummyClassifier from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier %matplotlib inline sns.set_palette("deep", desat=.6) sns.set(rc={'figure.figsize':(11.7,8.27)}) # cargar datos df = pd.read_csv(os.path.join("data","creditcard.csv"), sep=";") df.head() ###Output _____no_output_____ ###Markdown Analicemos el total de fraudes respecto a los casos que nos son fraudes: ###Code # calcular proporciones df_count = pd.DataFrame() df_count["fraude"] =["no","si"] df_count["total"] = df["Class"].value_counts() df_count["porcentaje"] = 100*df_count["total"] /df_count["total"] .sum() df_count ###Output _____no_output_____ ###Markdown Se observa que menos del 1% corresponde a registros frudulentos. La pregunta que surgen son:* ¿ Cómo deben ser el conjunto de entrenamiento y de testeo?* ¿ Qué modelos ocupar?* ¿ Qué métricas ocupar?Por ejemplo, analicemos el modelos de regresión logística y apliquemos el procedimiento estándar: ###Code # datos y = df.Class X = df.drop('Class', axis=1) # split dataset X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=27) # Creando el modelo lr = LogisticRegression(solver='liblinear').fit(X_train, y_train) # predecir lr_pred = lr.predict(X_test) # calcular accuracy accuracy_score(y_test, lr_pred) ###Output _____no_output_____ ###Markdown En general el modelo tiene un **accuracy** del 99,9%, es decir, un podría suponer que el modelo predice casi perfectamente, pero eso esta lejos de ser así. Para ver por qué es necesario seguir los siguientes pasos: 1. Cambiar la métrica de rendimientoEl primer paso es comparar con distintas métricas, para eso ocupemos las 4 métricas clásicas abordadas en el curso:* accuracy* precision* recall* f-scoreEn este punto deberá poner las métricas correspondientes y comentar sus resultados. ###Code # metrics y_true = list(y_test) y_pred = list(lr.predict(X_test)) print('\nMatriz de confusion:\n ') print(confusion_matrix(y_true,y_pred)) print('\nMetricas:\n ') print('accuracy: ',accuracy_score(y_test, lr_pred)) print('recall: ',recall_score(y_test, lr_pred)) print('precision: ',precision_score(y_test, lr_pred)) print('f-score: ',f1_score(y_test, lr_pred)) print("") ###Output Matriz de confusion: [[12471 16] [ 33 103]] Metricas: accuracy: 0.9961181969420898 recall: 0.7573529411764706 precision: 0.865546218487395 f-score: 0.807843137254902 ###Markdown Respuesta 1 En este caso como queremos comprobar si existe fraude en una tarjeta, nos interesaría analizar la métrica de Recall, ya que nos determina que tan poco nos equivocamos prediciendo, es decir, nos gustaría evitar los falsos negativos. 
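A modo de verificación, estas métricas se pueden recalcular a mano a partir de la matriz de confusión impresa más arriba (TN = 12471, FP = 16, FN = 33, TP = 103):

Recall = TP / (TP + FN) = 103 / (103 + 33) ≈ 0.757
Precision = TP / (TP + FP) = 103 / (103 + 16) ≈ 0.866

Es decir, el modelo deja pasar 33 fraudes reales como falsos negativos.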
En este caso tenemos un recall de aporximadamente un 75% lo cual a mi juicio es bastante malo ya que estamos hablando de tarjetas bancarias asi que con que un valor este mal predicho afectará demasiado a la empresa. 2. Cambiar algoritmoEl segundo paso es comparar con distintos modelos. Debe tener en cuenta que el modelo ocupaod resuelva el problema supervisado de clasificación.En este punto deberá ajustar un modelo de **random forest**, aplicar las métricas y comparar con el modelo de regresión logística. ###Code # train model rfc = RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1).fit(X_train, y_train) # algoritmo random forest # metrics y_true = list(y_test) y_pred = list(rfc.predict(X_test)) # predicciones con random forest print('\nMatriz de confusion:\n ') print(confusion_matrix(y_true,y_pred)) print('\nMetricas:\n ') print('accuracy: ',accuracy_score(y_true,y_pred)) print('recall: ',recall_score(y_true,y_pred)) print('precision: ',precision_score(y_true,y_pred)) print('f-score: ',f1_score(y_true,y_pred)) print("") ###Output Matriz de confusion: [[12485 2] [ 55 81]] Metricas: accuracy: 0.9954844331775331 recall: 0.5955882352941176 precision: 0.9759036144578314 f-score: 0.7397260273972603 ###Markdown Comparar los datos, intepretar los procentajes y escoger cual metodo es mejor Respuesta 2 Comparando los datos de manera objetiva, vemos que:Las Accuracy son casi iguales, de tal forma que si redondeamos ambas nos entregan un 99%Recall disminuyo notablemente de un 75% hacia un 58%Precisión aumentó de un 86% hacia 97%F-score bajó de 80% hacia 73%Notar que los datos pueden diferir al momento de hacer training pero en general la diferencia se mantuvo como lo explique anteriormente.A modo de análisis como se mencionó anteriormente nos interesa estudiar recall debido al caso de estudio y tuvo un bajo notorio en comparación a las demás métricas, asi que podemos decir que este método es mucho peor que el anterior. 3. Técnicas de remuestreo: sobremuestreo de clase minoritariaEl tercer paso es ocupar ténicas de remuestreo, pero sobre la clase minoritaria. Esto significa que mediantes ténicas de remuestreo trataremos de equiparar el número de elementos de la clase minoritaria a la clase mayoritaria. ###Code from sklearn.utils import resample # concatenar el conjunto de entrenamiento X = pd.concat([X_train, y_train], axis=1) # separar las clases not_fraud = X[X.Class==0] fraud = X[X.Class==1] # remuestrear clase minoritaria fraud_upsampled = resample(fraud, replace=True, # sample with replacement n_samples=len(not_fraud), # match number in majority class random_state=27) # reproducible results # recombinar resultados upsampled = pd.concat([not_fraud, fraud_upsampled]) # chequear el número de elementos por clases upsampled.Class.value_counts() # datos de entrenamiento sobre-balanceados y_train = upsampled.Class X_train = upsampled.drop('Class', axis=1) ###Output _____no_output_____ ###Markdown Ocupando estos nuevos conjunto de entrenamientos, vuelva a aplicar el modelos de regresión logística y calcule las correspondientes métricas. Además, justifique las ventajas y desventjas de este procedimiento. 
###Code upsampled = LogisticRegression(solver='liblinear').fit(X_train, y_train) # algoritmo de regresion logistica # metrics y_true = list(y_test) y_pred = list(upsampled.predict(X_test)) print('\nMatriz de confusion:\n ') print(confusion_matrix(y_true,y_pred)) print('\nMetricas:\n ') print('accuracy: ',accuracy_score(y_true,y_pred)) print('recall: ',recall_score(y_true,y_pred)) print('precision: ',precision_score(y_true,y_pred)) print('f-score: ',f1_score(y_true,y_pred)) print("") ###Output Matriz de confusion: [[12200 287] [ 12 124]] Metricas: accuracy: 0.976313079299691 recall: 0.9117647058823529 precision: 0.30170316301703165 f-score: 0.45338208409506403 ###Markdown Respuesta 3 En este caso notamos un claro aumento de la métrica Recall con un 91%, pero la precisión disminuye hacia un 30%, y además F score es del 45%, es decir, el modelo tuvo unca cantidad considerable de errores.En este caso la desventaja es que como se estan rellenando los datos, es posible que se multiplique muchas veces un dato erróneo, mientras que una ventaja es que para efectos de eficiencia es mas sencillo trabajar con datos balanceados. 4. Técnicas de remuestreo - Ejemplo de clase mayoritariaEl cuarto paso es ocupar ténicas de remuestreo, pero sobre la clase mayoritaria. Esto significa que mediantes ténicas de remuestreo trataremos de equiparar el número de elementos de la clase mayoritaria a la clase minoritaria. ###Code # remuestreo clase mayoritaria not_fraud_downsampled = resample(not_fraud, replace = False, # sample without replacement n_samples = len(fraud), # match minority n random_state = 27) # reproducible results # recombinar resultados downsampled = pd.concat([not_fraud_downsampled, fraud]) # chequear el número de elementos por clases downsampled.Class.value_counts() # datos de entrenamiento sub-balanceados y_train = downsampled.Class X_train = downsampled.drop('Class', axis=1) ###Output _____no_output_____ ###Markdown Ocupando estos nuevos conjunto de entrenamientos, vuelva a aplicar el modelos de regresión logística y calcule las correspondientes métricas. Además, justifique las ventajas y desventjas de este procedimiento. ###Code undersampled = LogisticRegression(solver='liblinear').fit(X_train, y_train) # modelo de regresi+on logística # metrics y_true = list(y_test) y_pred = list(undersampled.predict(X_test)) print('\nMatriz de confusion:\n ') print(confusion_matrix(y_true,y_pred)) print('\nMetricas:\n ') print('accuracy: ',accuracy_score(y_true,y_pred)) print('recall: ',recall_score(y_true,y_pred)) print('precision: ',precision_score(y_true,y_pred)) print('f-score: ',f1_score(y_true,y_pred)) print("") ###Output Matriz de confusion: [[12216 271] [ 16 120]] Metricas: accuracy: 0.9772637249465261 recall: 0.8823529411764706 precision: 0.3069053708439898 f-score: 0.45540796963946867 ###Markdown MAT281 - Laboratorio N°02 Objetivos del laboratorio* Reforzar conceptos básicos de clasificación. Contenidos* [Problema 01](p1) I.- Problema 01El conjunto de datos se denomina `creditcard.csv` y consta de varias columnas con información acerca del fraude de tarjetas de crédito, en donde la columna **Class** corresponde a: 0 si no es un fraude y 1 si es un fraude.En este ejercicio se trabajará el problemas de clases desbalancedas. 
Veamos las primeras cinco filas dle conjunto de datos: ###Code import os import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn import datasets from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix,accuracy_score,recall_score,precision_score,f1_score from sklearn.dummy import DummyClassifier from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier %matplotlib inline sns.set_palette("deep", desat=.6) sns.set(rc={'figure.figsize':(11.7,8.27)}) # cargar datos df = pd.read_csv(os.path.join("data","creditcard.csv"), sep=";") df.head() ###Output _____no_output_____ ###Markdown Analicemos el total de fraudes respecto a los casos que nos son fraudes: ###Code # calcular proporciones df_count = pd.DataFrame() df_count["fraude"] =["no","si"] df_count["total"] = df["Class"].value_counts() df_count["porcentaje"] = 100*df_count["total"] /df_count["total"] .sum() df_count ###Output _____no_output_____ ###Markdown Se observa que menos del 1% corresponde a registros frudulentos. La pregunta que surgen son:* ¿ Cómo deben ser el conjunto de entrenamiento y de testeo?* ¿ Qué modelos ocupar?* ¿ Qué métricas ocupar?Por ejemplo, analicemos el modelos de regresión logística y apliquemos el procedimiento estándar: ###Code # datos y = df.Class X = df.drop('Class', axis=1) # split dataset X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=27) # Creando el modelo lr = LogisticRegression(solver='liblinear').fit(X_train, y_train) # predecir lr_pred = lr.predict(X_test) # calcular accuracy accuracy_score(y_test, lr_pred) ###Output _____no_output_____ ###Markdown En general el modelo tiene un **accuracy** del 99,9%, es decir, un podría suponer que el modelo predice casi perfectamente, pero eso esta lejos de ser así. Para ver por qué es necesario seguir los siguientes pasos: 1. Cambiar la métrica de rendimientoEl primer paso es comparar con distintas métricas, para eso ocupemos las 4 métricas clásicas abordadas en el curso:* accuracy* precision* recall* f-scoreEn este punto deberá poner las métricas correspondientes y comentar sus resultados. ###Code # metrics y_true = list(y_test) y_pred = list(lr.predict(X_test)) print('\nMatriz de confusion:\n ') print(confusion_matrix(y_true,y_pred)) print('\nMetricas:\n ') print('accuracy: ',accuracy_score(y_test, lr_pred)) print('recall: ',recall_score(y_test, lr_pred)) print('precision: ',precision_score(y_test, lr_pred)) print('f-score: ',f1_score(y_test, lr_pred)) print("") ###Output Matriz de confusion: [[12471 16] [ 33 103]] Metricas: accuracy: 0.9961181969420898 recall: 0.7573529411764706 precision: 0.865546218487395 f-score: 0.807843137254902 ###Markdown Como se puede observar al comparar las distintas metricas se obtiene que la que se ajusta mejor al model es accuracy pues esta mas cerca de 1 2. Cambiar algoritmoEl segundo paso es comparar con distintos modelos. Debe tener en cuenta que el modelo ocupaod resuelva el problema supervisado de clasificación.En este punto deberá ajustar un modelo de **random forest**, aplicar las métricas y comparar con el modelo de regresión logística. 
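Una forma posible de completar la celda siguiente (que viene como plantilla, con `rfc = None` y una llamada a `RandomForestClassifier.predict_proba` sin modelo ajustado) es reutilizar los mismos hiperparámetros de la versión resuelta más arriba; el siguiente bloque es solo un esquema orientativo:

###Code
# ajustar el modelo de random forest (mismos hiperparámetros que en la versión resuelta más arriba)
rfc = RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1).fit(X_train, y_train)

# predecir sobre el conjunto de test y calcular las métricas
y_true = list(y_test)
y_pred = list(rfc.predict(X_test))

print('\nMatriz de confusion:\n ')
print(confusion_matrix(y_true, y_pred))

print('\nMetricas:\n ')
print('accuracy: ', accuracy_score(y_true, y_pred))
print('recall: ', recall_score(y_true, y_pred))
print('precision: ', precision_score(y_true, y_pred))
print('f-score: ', f1_score(y_true, y_pred))
###Output _____no_output_____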
###Code # train model rfc = None # algoritmo random forest # metrics y_true = list(y_test) y_pred = list(RandomForestClassifier.predict_proba(X,X_test)) # predicciones con random forest print('\nMatriz de confusion:\n ') print(confusion_matrix(y_true,y_pred)) print('\nMetricas:\n ') print('accuracy: ',accuracy_score(y_true,y_pred)) print('recall: ',recall_score(y_true,y_pred)) print('precision: ',precision_score(y_true,y_pred)) print('f-score: ',f1_score(y_true,y_pred)) print("") ###Output _____no_output_____ ###Markdown 3. Técnicas de remuestreo: sobremuestreo de clase minoritariaEl tercer paso es ocupar ténicas de remuestreo, pero sobre la clase minoritaria. Esto significa que mediantes ténicas de remuestreo trataremos de equiparar el número de elementos de la clase minoritaria a la clase mayoritaria. ###Code from sklearn.utils import resample # concatenar el conjunto de entrenamiento X = pd.concat([X_train, y_train], axis=1) # separar las clases not_fraud = X[X.Class==0] fraud = X[X.Class==1] # remuestrear clase minoritaria fraud_upsampled = resample(fraud, replace=True, # sample with replacement n_samples=len(not_fraud), # match number in majority class random_state=27) # reproducible results # recombinar resultados upsampled = pd.concat([not_fraud, fraud_upsampled]) # chequear el número de elementos por clases upsampled.Class.value_counts() # datos de entrenamiento sobre-balanceados y_train = upsampled.Class X_train = upsampled.drop('Class', axis=1) ###Output _____no_output_____ ###Markdown Ocupando estos nuevos conjunto de entrenamientos, vuelva a aplicar el modelos de regresión logística y calcule las correspondientes métricas. Además, justifique las ventajas y desventjas de este procedimiento. ###Code upsampled = None # algoritmo de regresion logistica # metrics y_true = list(y_test) y_pred = list(upsampled.predict_proba(X_test)) print('\nMatriz de confusion:\n ') print(confusion_matrix(y_true,y_pred)) print('\nMetricas:\n ') print('accuracy: ',accuracy_score(y_true,y_pred)) print('recall: ',recall_score(y_true,y_pred)) print('precision: ',precision_score(y_true,y_pred)) print('f-score: ',f1_score(y_true,y_pred)) print("") ###Output _____no_output_____ ###Markdown 4. Técnicas de remuestreo - Ejemplo de clase mayoritariaEl cuarto paso es ocupar ténicas de remuestreo, pero sobre la clase mayoritaria. Esto significa que mediantes ténicas de remuestreo trataremos de equiparar el número de elementos de la clase mayoritaria a la clase minoritaria. ###Code # remuestreo clase mayoritaria not_fraud_downsampled = resample(not_fraud, replace = False, # sample without replacement n_samples = len(fraud), # match minority n random_state = 27) # reproducible results # recombinar resultados downsampled = pd.concat([not_fraud_downsampled, fraud]) # chequear el número de elementos por clases downsampled.Class.value_counts() # datos de entrenamiento sub-balanceados y_train = downsampled.Class X_train = downsampled.drop('Class', axis=1) ###Output _____no_output_____ ###Markdown Ocupando estos nuevos conjunto de entrenamientos, vuelva a aplicar el modelos de regresión logística y calcule las correspondientes métricas. Además, justifique las ventajas y desventjas de este procedimiento. 
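De manera análoga, la celda anterior de la sección 3 (donde `upsampled = None` y la predicción usa `predict_proba`) podría completarse reajustando la regresión logística sobre el conjunto de entrenamiento sobre-balanceado, como en la versión resuelta más arriba; esquema tentativo:

###Code
# reajustar la regresión logística sobre los datos de entrenamiento sobre-balanceados
upsampled = LogisticRegression(solver='liblinear').fit(X_train, y_train)

y_true = list(y_test)
y_pred = list(upsampled.predict(X_test))

print(confusion_matrix(y_true, y_pred))
print('accuracy: ', accuracy_score(y_true, y_pred))
print('recall: ', recall_score(y_true, y_pred))
print('precision: ', precision_score(y_true, y_pred))
print('f-score: ', f1_score(y_true, y_pred))
###Output _____no_output_____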
###Code
# ajustar la regresión logística sobre el conjunto de entrenamiento sub-balanceado
undersampled = LogisticRegression(solver='liblinear').fit(X_train, y_train)  # modelo de regresión logística

# metrics
y_true = list(y_test)
y_pred = list(undersampled.predict(X_test))

print('\nMatriz de confusion:\n ')
print(confusion_matrix(y_true,y_pred))

print('\nMetricas:\n ')
print('accuracy: ',accuracy_score(y_true,y_pred))
print('recall: ',recall_score(y_true,y_pred))
print('precision: ',precision_score(y_true,y_pred))
print('f-score: ',f1_score(y_true,y_pred))
print("")
###Output _____no_output_____
tp4/tp4_Aliocha_Limbosch.ipynb
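Note préliminaire : dans les cellules de chargement ci-dessous, l'expression `open(chemin1 and chemin2 and ...)` n'ouvre en réalité que le dernier fichier de la chaîne (en Python, `and` entre chaînes non vides renvoie le dernier opérande) ; le corpus effectivement lu est donc plus petit que prévu. Esquisse minimale d'une alternative qui lit et concatène tous les fichiers — la fonction `charger_corpus` est un nom hypothétique, et on suppose la même arborescence `../data/txt/` :

###Code
# Lire et concaténer tous les fichiers d'une année, puis tronquer à n caractères
def charger_corpus(chemins, n=900000):
    textes = []
    for chemin in chemins:
        with open(chemin, encoding='utf-8') as f:
            textes.append(f.read())
    return "\n".join(textes)[:n]

# exemple avec les deux fichiers de 1914
text = charger_corpus([
    "../data/txt/Bxl_1914_Tome_II1_Part_1.txt",
    "../data/txt/Bxl_1914_Tome_II1_Part_2.txt",
])
###Output _____no_output_____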
###Markdown Année : 1884 ###Code # Charger le texte n=900000 text = open("../data/txt/Bxl_1884_Tome_I1_Part_1.txt" and "../data/txt/Bxl_1884_Tome_I1_Part_2.txt" and "../data/txt/Bxl_1884_Tome_I1_Part_3.txt" and "../data/txt/Bxl_1884_Tome_I1_Part_4.txt" and "../data/txt/Bxl_1884_Tome_I1_Part_5.txt" and "../data/txt/Bxl_1884_Tome_I1_Part_6.txt" and "../data/txt/Bxl_1884_Tome_I1_Part_7.txt" and "../data/txt/Bxl_1884_Tome_I2_Part_1.txt" and "../data/txt/Bxl_1884_Tome_I2_Part_2.txt" and "../data/txt/Bxl_1884_Tome_I2_Part_3.txt" and "../data/txt/Bxl_1884_Tome_I2_Part_4.txt" and "../data/txt/Bxl_1884_Tome_I2_Part_5.txt" and "../data/txt/Bxl_1884_Tome_I2_Part_6.txt" and "../data/txt/Bxl_1884_Tome_I2_Part_7.txt" and "../data/txt/Bxl_1884_Tome_I2_Part_8.txt" and "../data/txt/Bxl_1884_Tome_I2_Part_9.txt" and "../data/txt/Bxl_1884_Tome_I2_Part_10.txt", encoding='utf-8').read()[:n] ###Output _____no_output_____ ###Markdown M. Buchholtz apparait 5 fois dans le corpusRejet apparait 3 fois dans le corpusM. Finet apparait 3 fois dans le corpusPoelaert apparait 3 fois dans le corpusFinet apparait 2 fois dans le corpusOffre apparait 2 fois dans le corpusMM. De Luyck apparait 2 fois dans le corpusD é p ô t apparait 1 fois dans le corpusTerrain apparait 1 fois dans le corpusç o i apparait 1 fois dans le corpusBMEDT apparait 1 fois dans le corpusGranvelle apparait 1 fois dans le corpusAffaire Jardon apparait 1 fois dans le corpuslegs Byl apparait 1 fois dans le corpusRemise apparait 1 fois dans le corpusSALUBRITÉ PUBLIQUE apparait 1 fois dans le corpusM. Kops apparait 1 fois dans le corpusMédecin apparait 1 fois dans le corpusM. Godineau apparait 1 fois dans le corpusLegs apparait 1 fois dans le corpusTAXES apparait 1 fois dans le corpusDefuisseaux apparait 1 fois dans le corpusSabbe apparait 1 fois dans le corpusToussaint apparait 1 fois dans le corpusVan Dystadt apparait 1 fois dans le corpusM. Courouble apparait 1 fois dans le corpusM. Otto apparait 1 fois dans le corpusAdoption apparait 1 fois dans le corpusTravail apparait 1 fois dans le corpusPayer apparait 1 fois dans le corpusVauthier apparait 1 fois dans le corpusJacobs apparait 1 fois dans le corpusM. Patris apparait 1 fois dans le corpusRaspoet apparait 1 fois dans le corpusTordeus apparait 1 fois dans le corpusM. Crombez apparait 1 fois dans le corpusVellut apparait 1 fois dans le corpusM. Sehaffers apparait 1 fois dans le corpusVellnt apparait 1 fois dans le corpusM. Hardy apparait 1 fois dans le corpusOctroi apparait 1 fois dans le corpusInhumations apparait 1 fois dans le corpusUNIVERSITÉ apparait 1 fois dans le corpusProposition apparait 1 fois dans le corpusM. Vanderkindere apparait 1 fois dans le corpusHommage apparait 1 fois dans le corpusVAN BEVERE ET C apparait 1 fois dans le corpusVAN DE MEULEBROECK apparait 1 fois dans le corpusVANDER L1NDEN apparait 1 fois dans le corpusBourgmestre apparait 1 fois dans le corpus ###Code #### Année : 1895 ###Output _____no_output_____ ###Markdown Charger le texten=900000text = open("../data/txt/Bxl_1895_Tome_I1_Part_1.txt" and"../data/txt/Bxl_1895_Tome_I1_Part_2.txt" and"../data/txt/Bxl_1895_Tome_I1_Part_3.txt" and"../data/txt/Bxl_1895_Tome_I1_Part_4.txt" and"../data/txt/Bxl_1895_Tome_I1_Part_5.txt" and"../data/txt/Bxl_1895_Tome_I1_Part_6.txt" and"../data/txt/Bxl_1895_Tome_I1_Part_7.txt" and"../data/txt/Bxl_1895_Tome_I1_Part_8.txt", encoding='utf-8').read()[:n] ###Code Messieurs apparait 17 fois dans le corpus M. 
le Bourgmestre apparait 13 fois dans le corpus Bourgmestre apparait 10 fois dans le corpus Richald apparait 10 fois dans le corpus Lepage apparait 9 fois dans le corpus Budget apparait 9 fois dans le corpus Monsieur le Bourgmestre apparait 7 fois dans le corpus q u i apparait 7 fois dans le corpus Legs apparait 6 fois dans le corpus Goffin apparait 5 fois dans le corpus M. Delannoy apparait 5 fois dans le corpus Heyvaert apparait 5 fois dans le corpus Saint-Géry apparait 5 fois dans le corpus Dépôt apparait 5 fois dans le corpus M. l'Echevin apparait 4 fois dans le corpus Echevin Janssen apparait 4 fois dans le corpus Lemonnier apparait 4 fois dans le corpus De Potter apparait 4 fois dans le corpus Furnémont apparait 3 fois dans le corpus SUPPLÉMENTAIRE apparait 3 fois dans le corpus Prolongement apparait 3 fois dans le corpus Gouverneur apparait 2 fois dans le corpus Le Gouverneur apparait 2 fois dans le corpus Veuillez apparait 2 fois dans le corpus De Mot apparait 2 fois dans le corpus M. PEchevin Janssen apparait 2 fois dans le corpus M. Heyvaert apparait 2 fois dans le corpus Delannoy apparait 2 fois dans le corpus Vandendorpe apparait 2 fois dans le corpus Vanhaelen apparait 2 fois dans le corpus Van Dyck apparait 2 fois dans le corpus Démission apparait 2 fois dans le corpus RELEVÉ apparait 2 fois dans le corpus Remercîments apparait 2 fois dans le corpus LECOMTE apparait 2 fois dans le corpus M. Godefroy apparait 1 fois dans le corpus M. Furnémont apparait 1 fois dans le corpus M. l'Echevin Janssen apparait 1 fois dans le corpus Bourgmestre de Molenbeek-Saint-Jean apparait 1 fois dans le corpus Installations apparait 1 fois dans le corpus Monsieur apparait 1 fois dans le corpus lemps apparait 1 fois dans le corpus M. le Gouverneur apparait 1 fois dans le corpus M. Lemonnier apparait 1 fois dans le corpus M. Allard apparait 1 fois dans le corpus De Bruyn apparait 1 fois dans le corpus Pentièreté apparait 1 fois dans le corpus De Borchgrave apparait 1 fois dans le corpus Nérinckx apparait 1 fois dans le corpus Le Ministre apparait 1 fois dans le corpus ###Output _____no_output_____ ###Markdown Année : 1914 ###Code # Charger le texte n=900000 text = open("../data/txt/Bxl_1914_Tome_II1_Part_1.txt" and "../data/txt/Bxl_1914_Tome_II1_Part_2.txt", encoding='utf-8').read()[:n] ###Output _____no_output_____ ###Markdown Bourgmestre apparait 26 fois dans le corpusReprise apparait 25 fois dans le corpusLe Secrétaire apparait 16 fois dans le corpusEchevins apparait 12 fois dans le corpusq u i apparait 9 fois dans le corpusM. V A U apparait 9 fois dans le corpusLe Bourgmestre apparait 7 fois dans le corpusSecrétaire apparait 4 fois dans le corpusVAUTHIER apparait 4 fois dans le corpusParfaite apparait 4 fois dans le corpusr u e apparait 4 fois dans le corpusq u ' i apparait 4 fois dans le corpusLouise apparait 3 fois dans le corpusj u i apparait 3 fois dans le corpusFourniture apparait 3 fois dans le corpusReine apparait 3 fois dans le corpusAttendu apparait 3 fois dans le corpusLEMONNIER apparait 3 fois dans le corpusFunck apparait 3 fois dans le corpusu i v apparait 3 fois dans le corpusJoseph-Stevens apparait 2 fois dans le corpusS. M apparait 2 fois dans le corpusi v i apparait 2 fois dans le corpusDate apparait 2 fois dans le corpusLingerie apparait 2 fois dans le corpusM. 
V A U T H I E B apparait 2 fois dans le corpusPrésident apparait 2 fois dans le corpusq u e apparait 2 fois dans le corpusu v e apparait 2 fois dans le corpusMaintien apparait 2 fois dans le corpusTirage apparait 2 fois dans le corpuslegs Godefroy apparait 1 fois dans le corpusNélis apparait 1 fois dans le corpusL E COLLÈGE apparait 1 fois dans le corpusLord-maire apparait 1 fois dans le corpusLauweryns apparait 1 fois dans le corpusP. Verdoot apparait 1 fois dans le corpusI. Legrand apparait 1 fois dans le corpusMonnaie apparait 1 fois dans le corpusThéo Mahy apparait 1 fois dans le corpusLe Secretaire apparait 1 fois dans le corpusM. le Bourgmestre apparait 1 fois dans le corpusV I I apparait 1 fois dans le corpusAdjudication apparait 1 fois dans le corpusAnnée apparait 1 fois dans le corpusr i e me apparait 1 fois dans le corpusg u e r r e apparait 1 fois dans le corpusu i t apparait 1 fois dans le corpusLe Gouverneur apparait 1 fois dans le corpusLe Gouverneur militaire apparait 1 fois dans le corpus ###Code #### Année : 1950 ###Output _____no_output_____ ###Markdown Charger le texten=900000text = open("../data/txt/Bxl_1950_Tome_I_Part_1.txt" and"../data/txt/Bxl_1950_Tome_I_Part_2.txt" and"../data/txt/Bxl_1950_Tome_I_Part_3.txt" and"../data/txt/Bxl_1950_Tome_I_Part_4.txt" and"../data/txt/Bxl_1950_Tome_I_Part_5.txt" and"../data/txt/Bxl_1950_Tome_I_Part_6.txt" and"../data/txt/Bxl_1950_Tome_I_Part_7.txt" and"../data/txt/Bxl_1950_Tome_I_Part_8.txt" and"../data/txt/Bxl_1950_Tome_II_Part_1.txt" and"../data/txt/Bxl_1950_Tome_II_Part_2.txt" and"../data/txt/Bxl_1950_Tome_II_Part_3.txt" and"../data/txt/Bxl_1950_Tome_II_Part_4.txt" and"../data/txt/Bxl_1950_Tome_II_Part_5.txt" and"../data/txt/Bxl_1950_Tome_II_Part_6.txt" and"../data/txt/Bxl_1950_Tome_II_Part_7.txt" and"../data/txt/Bxl_1950_Tome_II_Part_8.txt" and "../data/txt/Bxl_1950_Tome_II_Part_9.txt" and"../data/txt/Bxl_1950_Tome_II_Part_10.txt" and"../data/txt/Bxl_1950_Tome_III_Part_1.txt" and"../data/txt/Bxl_1950_Tome_III_Part_2.txt" and"../data/txt/Bxl_1950_Tome_III_Part_3.txt" and"../data/txt/Bxl_1950_Tome_III_Part_4.txt" and"../data/txt/Bxl_1950_Tome_III_Part_5.txt" and"../data/txt/Bxl_1950_Tome_III_Part_6.txt" and"../data/txt/Bxl_1950_Tome_III_Part_7.txt", encoding='utf-8').read()[:n] ###Code Brunfaut apparait 22 fois dans le corpus Prorogation apparait 16 fois dans le corpus Deboeck apparait 13 fois dans le corpus Thonet apparait 13 fois dans le corpus Démission apparait 11 fois dans le corpus Hermanus apparait 11 fois dans le corpus Jean apparait 10 fois dans le corpus Lambert apparait 9 fois dans le corpus Joseph apparait 8 fois dans le corpus q u e apparait 8 fois dans le corpus Jeanne apparait 7 fois dans le corpus de M apparait 7 fois dans le corpus Schmitz apparait 7 fois dans le corpus Budget apparait 7 fois dans le corpus Octroi apparait 6 fois dans le corpus Albert apparait 6 fois dans le corpus Charles apparait 6 fois dans le corpus Joseph Marchai apparait 6 fois dans le corpus Jean-Baptiste apparait 6 fois dans le corpus Genot apparait 6 fois dans le corpus Paul apparait 5 fois dans le corpus René apparait 5 fois dans le corpus Robert apparait 5 fois dans le corpus i f i c a t i apparait 5 fois dans le corpus Stuckens apparait 5 fois dans le corpus Subside apparait 5 fois dans le corpus Jules apparait 4 fois dans le corpus Charles Buis apparait 4 fois dans le corpus u v e apparait 4 fois dans le corpus Maurice apparait 4 fois dans le corpus Emile apparait 4 fois dans le corpus Jacques apparait 4 fois dans le corpus Marcel 
apparait 4 fois dans le corpus Louise apparait 4 fois dans le corpus Marcelle apparait 4 fois dans le corpus Léopold apparait 4 fois dans le corpus Vermeire apparait 4 fois dans le corpus Gaston apparait 3 fois dans le corpus Brugmann apparait 3 fois dans le corpus Marie apparait 3 fois dans le corpus Fondation B apparait 3 fois dans le corpus Eastman apparait 3 fois dans le corpus Grauw apparait 3 fois dans le corpus M. Schmitz apparait 3 fois dans le corpus r i f apparait 3 fois dans le corpus Gabrielle apparait 3 fois dans le corpus Edmond apparait 3 fois dans le corpus François apparait 3 fois dans le corpus Baron Lambert apparait 3 fois dans le corpus Simone apparait 3 fois dans le corpus ###Output _____no_output_____ ###Markdown Année : 1954 ###Code # Charger le texte n=900000 text = open("../data/txt/Bxl_1954_Tome_I_Part_1.txt" and "../data/txt/Bxl_1954_Tome_I_Part_2.txt" and "../data/txt/Bxl_1954_Tome_I_Part_3.txt" and "../data/txt/Bxl_1954_Tome_I_Part_4.txt" and "../data/txt/Bxl_1954_Tome_I_Part_5.txt" and "../data/txt/Bxl_1954_Tome_I_Part_6.txt" and "../data/txt/Bxl_1954_Tome_I_Part_7.txt" and "../data/txt/Bxl_1954_Tome_I_Part_8.txt" and "../data/txt/Bxl_1954_Tome_I_Part_10.txt" and "../data/txt/Bxl_1954_Tome_II_Part_1.txt" and "../data/txt/Bxl_1954_Tome_II_Part_2.txt" and "../data/txt/Bxl_1954_Tome_II_Part_3.txt" and "../data/txt/Bxl_1954_Tome_II_Part_4.txt" and "../data/txt/Bxl_1954_Tome_II_Part_5.txt" and "../data/txt/Bxl_1954_Tome_II_Part_6.txt" and "../data/txt/Bxl_1954_Tome_II_Part_7.txt" and "../data/txt/Bxl_1954_Tome_II_Part_8.txt" and "../data/txt/Bxl_1954_Tome_II_Part_9.txt" and "../data/txt/Bxl_1954_Tome_II_Part_10.txt" and "../data/txt/Bxl_1954_Tome_III_Part_1.txt" and "../data/txt/Bxl_1954_Tome_III_Part_2.txt" and "../data/txt/Bxl_1954_Tome_III_Part_3.txt" and "../data/txt/Bxl_1954_Tome_III_Part_4.txt" and "../data/txt/Bxl_1954_Tome_III_Part_5.txt" and "../data/txt/Bxl_1954_Tome_III_Part_6.txt" and "../data/txt/Bxl_1954_Tome_III_Part_7.txt" and "../data/txt/Bxl_1954_Tome_III_Part_8.txt" and "../data/txt/Bxl_1954_Tome_III_Part_9.txt", encoding='utf-8').read()[:n] ###Output _____no_output_____ ###Markdown q u e apparait 27 fois dans le corpusSchmitz apparait 13 fois dans le corpusi f i c a t i apparait 10 fois dans le corpusSchalckens apparait 8 fois dans le corpusde M apparait 7 fois dans le corpusAvella apparait 7 fois dans le corpusBrunfaut apparait 5 fois dans le corpusJanssens apparait 5 fois dans le corpusScheyven apparait 5 fois dans le corpusV I I I apparait 5 fois dans le corpusRecettes apparait 5 fois dans le corpusJoseph Marchai apparait 5 fois dans le corpusVan Leynseele apparait 5 fois dans le corpusv o i r i e apparait 4 fois dans le corpusq u i apparait 4 fois dans le corpusLeynseele apparait 4 fois dans le corpusRoyal apparait 4 fois dans le corpusSwolfs apparait 4 fois dans le corpusCharles Buis apparait 4 fois dans le corpusJean apparait 4 fois dans le corpusRoger apparait 4 fois dans le corpusBudget apparait 3 fois dans le corpusEglise Notre-Dame apparait 3 fois dans le corpusDachsbeck apparait 3 fois dans le corpusJeanne apparait 3 fois dans le corpusGrauw apparait 3 fois dans le corpusXavier Carton de Wiart apparait 3 fois dans le corpusChapitre III apparait 3 fois dans le corpusEchevin apparait 3 fois dans le corpusé t a i r e apparait 2 fois dans le corpusLéon apparait 2 fois dans le corpusEglise Saint-Nicolas apparait 2 fois dans le corpusLouise apparait 2 fois dans le corpusi v e r apparait 2 fois dans le corpusI I I apparait 2 fois dans le 
corpusu f f apparait 2 fois dans le corpusu i t e apparait 2 fois dans le corpuss u r apparait 2 fois dans le corpusi f i c a t i o apparait 2 fois dans le corpusu v e apparait 2 fois dans le corpusBrouhon apparait 2 fois dans le corpusAmendement apparait 2 fois dans le corpusQ u e apparait 2 fois dans le corpusKarel Bogaerd apparait 2 fois dans le corpusS O C I E apparait 2 fois dans le corpusEglise Sainte-Elisabeth apparait 2 fois dans le corpusi f i é du tram 12 apparait 2 fois dans le corpusp r é apparait 2 fois dans le corpusf i n i apparait 2 fois dans le corpusFrédéric apparait 2 fois dans le corpus ###Code #### Année : 1955 ###Output _____no_output_____ ###Markdown Charger le texten=900000text = open("../data/txt/Bxl_1955_Tome_I_Part_1.txt" and"../data/txt/Bxl_1955_Tome_I_Part_2.txt" and"../data/txt/Bxl_1955_Tome_I_Part_3.txt" and"../data/txt/Bxl_1955_Tome_I_Part_4.txt" and"../data/txt/Bxl_1955_Tome_I_Part_5.txt" and"../data/txt/Bxl_1955_Tome_I_Part_6.txt" and"../data/txt/Bxl_1955_Tome_I_Part_7.txt" and"../data/txt/Bxl_1955_Tome_I_Part_8.txt" and"../data/txt/Bxl_1955_Tome_I_Part_10.txt" and"../data/txt/Bxl_1955_Tome_II1_Part_1.txt" and"../data/txt/Bxl_1955_Tome_II1_Part_2.txt" and"../data/txt/Bxl_1955_Tome_II1_Part_3.txt" and"../data/txt/Bxl_1955_Tome_II1_Part_4.txt" and"../data/txt/Bxl_1955_Tome_II1_Part_5.txt" and"../data/txt/Bxl_1955_Tome_II1_Part_6.txt" and"../data/txt/Bxl_1955_Tome_II1_Part_7.txt" and "../data/txt/Bxl_1955_Tome_II1_Part_8.txt" and"../data/txt/Bxl_1955_Tome_II1_Part_9.txt" and"../data/txt/Bxl_1955_Tome_II1_Part_10.txt" and"../data/txt/Bxl_1955_Tome_II2_Part_1.txt" and"../data/txt/Bxl_1955_Tome_II2_Part_2.txt" and"../data/txt/Bxl_1955_Tome_II2_Part_3.txt" and"../data/txt/Bxl_1955_Tome_II2_Part_4.txt" and"../data/txt/Bxl_1955_Tome_II2_Part_5.txt" and"../data/txt/Bxl_1955_Tome_II2_Part_6.txt" and"../data/txt/Bxl_1955_Tome_II2_Part_7.txt" and"../data/txt/Bxl_1955_Tome_II2_Part_8.txt" and"../data/txt/Bxl_1955_Tome_II2_Part_9.txt", encoding='utf-8').read()[:n] ###Code Brunfaut apparait 55 fois dans le corpus Schalckens apparait 18 fois dans le corpus Recettes apparait 17 fois dans le corpus Brouhon apparait 16 fois dans le corpus Schmitz apparait 15 fois dans le corpus Avella apparait 11 fois dans le corpus Budget apparait 10 fois dans le corpus Répond apparait 8 fois dans le corpus Jean apparait 7 fois dans le corpus Romaine apparait 7 fois dans le corpus Saulnier apparait 7 fois dans le corpus Chapitre VII apparait 6 fois dans le corpus Swolfs apparait 6 fois dans le corpus Chapitre IV apparait 6 fois dans le corpus Scheyven apparait 6 fois dans le corpus Octroi apparait 5 fois dans le corpus Emile André apparait 5 fois dans le corpus Van Leynseele apparait 5 fois dans le corpus Echevin apparait 5 fois dans le corpus Agréation apparait 4 fois dans le corpus Paul apparait 4 fois dans le corpus Constantin Meunier apparait 4 fois dans le corpus Grauw apparait 4 fois dans le corpus Jules apparait 4 fois dans le corpus Pierson apparait 4 fois dans le corpus Henri apparait 3 fois dans le corpus Funck apparait 3 fois dans le corpus Van Calck-Stevens apparait 3 fois dans le corpus Simone apparait 3 fois dans le corpus Chapitre V. 
apparait 3 fois dans le corpus Chapitre III apparait 3 fois dans le corpus Guillaume apparait 3 fois dans le corpus Marguerite apparait 3 fois dans le corpus Louise apparait 3 fois dans le corpus Léon Lepage apparait 3 fois dans le corpus René apparait 3 fois dans le corpus Laboureur apparait 3 fois dans le corpus Greef apparait 3 fois dans le corpus Bonification apparait 2 fois dans le corpus Wets apparait 2 fois dans le corpus Aron-Samdam apparait 2 fois dans le corpus Théophile Janssens apparait 2 fois dans le corpus Louis apparait 2 fois dans le corpus Bischoffsheim apparait 2 fois dans le corpus Marcelle apparait 2 fois dans le corpus Joseph apparait 2 fois dans le corpus Marcel apparait 2 fois dans le corpus Section III apparait 2 fois dans le corpus Recoupe apparait 2 fois dans le corpus Adolphe Max apparait 2 fois dans le corpus ###Output _____no_output_____ ###Markdown 5. Word Embeddings : le modèle Word2Vec ###Code %%time model = Word2Vec( corpus, vector_size=32, window=6, # La taille du "contexte", ici 6 mots avant et après le mot observé min_count=8, # On ignore les mots qui n'apparaissent pas au moins 8 fois dans le corpus workers=4, epochs=5 ) ###Output _____no_output_____ ###Markdown 5.1. Chercher les mots les plus proches d'un terme donné – à l’aide du syllabus : Entités repérées dans le syllabus : ###Code Walthère Frère-Orban -> Politicien libéral Pierre Van Humbeéck -> Politicien libéral, Directeur du Ministère de l'Instruction Publique Louis Morichar -> Politicien libéral Leo Collard -> Politicien socialiste Charles Woeste -> Politicien catholique Victor Jacobs -> Politicien catholique Jules Malou -> Politicien catholique Pierre Harmel -> Politicien catholique Ministère de l'instruction publique -> Innovation libérale Comissions mixtes -> Innovation catholique Loi Schollaert -> Loi votée par le camps catholique rendant les cours de religions obligatoires Loi Buset de Schryver -> Loi votée par le camps libéral laissant le libre choix entre les cours de religion catholique et le cours de moral non confessionnelle ###Output _____no_output_____ ###Markdown model.wv.most_similar("orban", topn=20) Objectif : Walthère Frère-Orban Pas de résultat. model.wv.most_similar("humbeeck", topn=20) Objectif : Pierre Van Humbeéck ###Code [('doornick', 0.9406827688217163), ('verstraeten', 0.9402799606323242), ('mastraeten', 0.9401127099990845), ('froidmont', 0.9333361983299255), ('gaver', 0.9248977899551392), ('doucet', 0.9242807030677795), ('dansaert', 0.9188680648803711), ('seghers', 0.9188074469566345), ('dermeeren', 0.9159582853317261), ('kaieman', 0.9062676429748535), ('vandermeeren', 0.903771162033081), ('otlet', 0.9034638404846191), ('michiels', 0.8942472338676453), ('bartels', 0.8919656276702881), ('van_humbeeck', 0.8919497728347778), # Argument en la faveur de "Pierre Van Humbeéck" ? ('derlinden', 0.8706073760986328), ('linden', 0.8660638928413391), ('bisschoffsheim', 0.8580318689346313), ('bischoffsheim', 0.8561493158340454), ('mersman', 0.8548857569694519)] ###Output _____no_output_____ ###Markdown model.wv.most_similar("morichar", topn=20) Objectif : Louis Morichar Pas de résultat. model.wv.most_similar("collard", topn=20) Objectif : Leo Collard Pas de résultat. model.wv.most_similar("leo", topn=20) Objectif : Leo Collard Pas de résultat. model.wv.most_similar("woeste", topn=20) Objectif : Charles Woeste Pas de résultat. 
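Une piste d'explication pour les « Pas de résultat » ci-dessus : avec `min_count=8`, toute forme apparaissant moins de 8 fois dans le corpus est exclue du vocabulaire de Word2Vec, et `most_similar` échoue alors (KeyError) pour ce terme. Esquisse minimale de vérification, en supposant le modèle entraîné plus haut :

###Code
# Vérifier quels termes figurent dans le vocabulaire du modèle avant d'appeler most_similar
for mot in ["orban", "woeste", "schollaert", "jacobs", "humbeeck"]:
    print(mot, "->", "présent" if mot in model.wv.key_to_index else "absent du vocabulaire")
###Output _____no_output_____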
model.wv.most_similar("jacobs", topn=20) Objectif : Victor Jacobs ###Code [('lavallee', 0.9474549293518066), ('goffart', 0.9459542036056519), ('vandermeeren', 0.9349172711372375), ('verstraeten', 0.91069096326828), ('bartels', 0.9046281576156616), ('cappellemans', 0.9023787379264832), ('otlet', 0.8928908109664917), ('depaire', 0.8907407522201538), ('mastraeten', 0.8871728181838989), ('fontainas', 0.8855920433998108), ('michiels', 0.8850648999214172), ('kaieman', 0.8849473595619202), ('ranwet', 0.8847635984420776), ('walter', 0.8824782967567444), ('maskens', 0.8805741667747498), ('veldekens', 0.8798433542251587), ('bischoffsheim', 0.8796603679656982), ('linden', 0.874823808670044), ('hemptinne', 0.8736701011657715), ('orts', 0.8707718253135681)] ###Output _____no_output_____ ###Markdown model.wv.most_similar("malou", topn=20) Objectif : Jules Malou Pas de résultat. model.wv.most_similar("harmel", topn=20) Objectif : Pierre Harmel Pas de résultat. model.wv.most_similar("schollaert", topn=20) Objectif : Loi Schollaert Pas de résultat. model.wv.most_similar("buset", topn=20) Objectif : Loi Buset de Schryver Pas de résultat. model.wv.most_similar("schryver", topn=20) Objectif : Loi Buset de Schryver Pas de résultat. ###Code ### 5.2. Chercher les mots les plus proches d'un terme donné – à l’aide des entités retrouvées via SpaCy : ###Output _____no_output_____ ###Markdown model.wv.most_similar("vanderstraeten", topn=20) résultats non pertinents ###Code [('vandenbranden', 0.932869553565979), ('boon', 0.9312206506729126), ('faes', 0.9304192066192627), ('schacrbeek', 0.9294443726539612), ('muller', 0.9281514883041382), ('perot', 0.9264877438545227), ('lacken', 0.9257716536521912), ('saint_-_laurent', 0.9245455265045166), ('samaritaine', 0.9195286631584167), ('lyssens', 0.9177162051200867), ('marronnier', 0.9159501791000366), ('bollebeek', 0.9158881902694702), ('teyssens', 0.9155896902084351), ('demeus', 0.9148169159889221), ('faisans', 0.9140084385871887), ('dewild', 0.9130191206932068), ('steen', 0.9116005897521973), ('sehaerbeek', 0.9110332131385803), ('>>_ledoux', 0.909467339515686), ('>>_coyique', 0.9061734080314636)] ###Output _____no_output_____ ###Markdown model.wv.most_similar("vauthier", topn=20) ###Code [('lenglentier', 0.8316638469696045), ('ghendt', 0.822564959526062), ('onze_paroisses', 0.8090373873710632), ('anciens_etudiants', 0.8048951625823975), -> JACKPOT ! 
('grande_harmonie', 0.8026869893074036), ('audiences', 0.7945123314857483), ('societe_union', 0.7901893854141235), ('grande_kermesse', 0.7859131693840027), ('romberg', 0.7824746370315552), ('plus_eloignee', 0.7807037234306335), ('wyngaerd', 0.774850606918335), ('echcvins', 0.7740960121154785), ('justices', 0.7740155458450317), ('territoires', 0.7699812650680542), ('eglise_evangelique', 0.7697824239730835), ('journaliere', 0.7668567895889282), ('se_poursuivent', 0.7664451599121094), ('puysegur', 0.7661875486373901), ('quatre_lots', 0.7661452293395996), ('wemmel', 0.765738308429718)] ###Output _____no_output_____ ###Markdown model.wv.most_similar("anciens_etudiants", topn=20) ###Code [('onze_paroisses', 0.8631004691123962), ('nouveaux_locaux', 0.8509218096733093), ('morgendstar', 0.8457364439964294), ('territoires', 0.8436083793640137), ('petards', 0.8406855463981628), ('agricole', 0.8392044305801392), ('filles_aveugles', 0.8327329754829407), ('victoires_au_sablon', 0.8298665285110474), ('diverses_fournitures', 0.8298200368881226), ('gardiennes', 0.8269129395484924), ('appartiendront', 0.8241409063339233), ('eglise_evangelique', 0.8238897323608398), ('fusees_ou_autres_artifices', 0.8231794834136963), ('sainte_-_claire', 0.8220885992050171), ('minervalia', 0.8217548727989197), ('subsistances', 0.8210970163345337), ('sera_publiee', 0.8209513425827026), ('herniaires', 0.8192631602287292), ('hospices_civils', 0.8186041116714478), ('besoins_reels', 0.818183422088623)] ###Output _____no_output_____ ###Markdown model.wv.most_similar("allard", topn=20) ###Code [('visschers', 0.856752872467041), ('gerard', 0.8366480469703674), ('corneille', 0.8342634439468384), ('leemans', 0.8328081369400024), ('keilig', 0.8138514757156372), ('fetis', 0.8133304119110107), ('haeck', 0.8093790411949158), ('jeanbaptiste', 0.8092698454856873), ('vanhumbeeck', 0.8072836995124817), -> JACKPOT ! ('benoit', 0.8057526350021362), ('scghers', 0.8039580583572388), ('duray', 0.8037801384925842), ('notaire_toussaint', 0.8036455512046814), ('lauters', 0.7997519969940186), ('noel', 0.7993519902229309), ('claessens', 0.7960987091064453), ('crocq', 0.7956347465515137), ('inculpe', 0.7947080135345459), ('raphael', 0.794048547744751), ('verlinden', 0.7930324077606201)] ###Output _____no_output_____ ###Markdown model.wv.most_similar("veldekens", topn=20) ###Code [('spaak', 0.9856860041618347), ('hauwaerts', 0.9799227118492126), ('depaire', 0.9754725098609924), ('maskens', 0.9691603779792786), ('otlet', 0.9617078900337219), ('delloye', 0.959078311920166), ('cappellemans', 0.9585481286048889), ('vandermeeren', 0.9473066329956055), ('goffart', 0.9464858770370483), ('walter', 0.9408642053604126), ('lavallee', 0.9378466606140137), ('jacobs', 0.9314154982566833), ('capouillet', 0.9304124712944031), ('verstraeten', 0.9269962310791016), ('mersman', 0.9216134548187256), ('funck', 0.9190061688423157), ('bischoffsheim', 0.9172085523605347), -> JACKPOT ! ('kaieman', 0.9155933260917664), ('tielemans', 0.9141072034835815), ('brugmann', 0.9132271409034729)] ###Output _____no_output_____ ###Markdown model.wv.most_similar("depaire", topn=20) ###Code [('veldekens', 0.9754725098609924), -> JACKPOT ! 
('spaak', 0.972589373588562), ('maskens', 0.9692990779876709), ('hauwaerts', 0.9641202688217163), ('walter', 0.9586836099624634), ('otlet', 0.9522045850753784), ('tielemans', 0.9444215893745422), ('goffart', 0.9423696398735046), ('lavallee', 0.9351357221603394), ('delloye', 0.9343769550323486), ('mersman', 0.9314110279083252), ('cappellemans', 0.9303078651428223), ('jacobs', 0.9285451173782349), ('bischoffsheim', 0.9284618496894836), -> JACKPOT ! ('vandermeeren', 0.9226492643356323), ('orts', 0.9224843382835388), ('verstraeten', 0.9194663166999817), ('funck', 0.9139727354049683), ('brugmann', 0.9139671921730042), ('kaieman', 0.9122623801231384)] ###Output _____no_output_____ ###Markdown 5.3. Calculer la similarité entre deux termes A l'aide des entités retrouvées via SpaCy : ###Code model.wv.similarity("guillery", "ecole") ###Output _____no_output_____ ###Markdown 0.009244829 ###Code model.wv.similarity("guillery", "etudiant") ###Output _____no_output_____ ###Markdown 0.69175315 ###Code model.wv.similarity("guillery", "education") ###Output _____no_output_____ ###Markdown 0.030845288 ###Code model.wv.similarity("doucet", "ecole") ###Output _____no_output_____ ###Markdown -0.09731162 ###Code model.wv.similarity("doucet", "etudiant") ###Output _____no_output_____ ###Markdown 0.4480042 ###Code model.wv.similarity("doucet", "education") ###Output _____no_output_____ ###Markdown -0.1311168 ###Code model.wv.similarity("vauthier", "ecole") ###Output _____no_output_____ ###Markdown 0.1834881 ###Code model.wv.similarity("vauthier", "education") ###Output _____no_output_____ ###Markdown 0.16648027 ###Code model.wv.similarity("vauthier", "etudiant") ###Output _____no_output_____ ###Markdown 0.4680469 ###Code model.wv.similarity("veldekens", "ecole") ###Output _____no_output_____ ###Markdown -0.13937452 ###Code model.wv.similarity("veldekens", "education") ###Output _____no_output_____ ###Markdown -0.09900504 ###Code model.wv.similarity("veldekens", "etudiant") ###Output _____no_output_____ ###Markdown 0.42229372 ###Code model.wv.similarity("mennessier", "ecole") ###Output _____no_output_____ ###Markdown Absence de résultat. ###Code model.wv.similarity("depaire", "ecole") ###Output _____no_output_____ ###Markdown -0.12580948 ###Code model.wv.similarity("jacobs", "ecole") ###Output _____no_output_____ ###Markdown -0.021588441 ###Code model.wv.similarity("jacobs", "etudiant") ###Output _____no_output_____ ###Markdown 0.27547848 ###Code model.wv.similarity("jacobs", "loi") ###Output _____no_output_____ ###Markdown -0.33412132 ###Code ##### A l'aide des termes associés par la fonction most_similar aux entités retrouvées : ###Output _____no_output_____ ###Markdown model.wv.similarity("bisschoffsheim", "ecole") ###Code -0.076267675 ###Output _____no_output_____ ###Markdown model.wv.similarity("veldekens", "jacobs") ###Code 0.8889205 ###Output _____no_output_____ ###Markdown Vérification de la technique a l'aide des sources historiques : ###Code model.wv.similarity("humbeeck", "ecole") ###Output _____no_output_____ ###Markdown -0.10660429 ###Code model.wv.similarity("humbeeck", "instruction") ###Output _____no_output_____ ###Markdown 0.004246 ###Code model.wv.similarity("commissions", "mixtes") ###Output _____no_output_____ ###Markdown 0.31646422 ###Code model.wv.similarity("commission", "mixte") ###Output _____no_output_____ ###Markdown -0.024145119 ###Code ## 5.4. Effectuer des recherches complexes à travers l'espace vectoriel #### 5.4.1. 
Retrouver des termes liés aux Guerres Scolaires ###Output _____no_output_____ ###Markdown print(model.wv.most_similar(positive=['guerre', 'ecole', 'scolaire'], negative=['france'])) ###Code [('athenee_royal', 0.7810238003730774), ('administ', 0.6935469508171082), ('ecole_primaire', 0.6822361946105957), ('amigo', 0.6652345061302185), ('article_155', 0.6577202081680298), ('administration_centrale', 0.6504440903663635), ('ou_section_electorale', 0.6328341960906982), ('exercice_1855', 0.6324663162231445), ('universite', 0.6295408606529236), ('abbe', 0.6285657286643982)] ###Output _____no_output_____ ###Markdown print(model.wv.most_similar(positive=['eglise', 'ecole', 'scolaire'], negative=['france'])) ###Code [('hopital_st_.-_jean', 0.7686442136764526), ('hopital', 0.7565287351608276), ('ancien_local', 0.7355120182037354), ('athenee_royal', 0.7090471982955933), ('hopital_st_.-_pierre', 0.7057067155838013), ('ecole_primaire', 0.696416437625885), ('eglise_collegiale', 0.6907951235771179), ('etang', 0.6860118508338928), ('ministration', 0.6809578537940979), ('abbe', 0.6774770021438599)] ###Output _____no_output_____ ###Markdown 5.4.2. Retrouver des entités repérées dans les sources historiques mais non trouvées par les techniques utilisées précédement A la recherche de Charles Woeste ###Code print(model.wv.most_similar(positive=['charles', 'parti', 'catholique', 'ecole', 'scolaire'], negative=['france'])) ###Output _____no_output_____ ###Markdown [('acte_precite', 0.8050993084907532), ('elat', 0.8028843402862549), ('exploitant', 0.7939587831497192), ('archiviste', 0.7737072110176086), ('ancien_batiment', 0.7680568099021912), ('administ', 0.7614169716835022), ('espace_reserve', 0.7568286657333374), ('concessionnaire_devra_tenir_constamment', 0.7564624547958374), ('indigne', 0.754558801651001), ('vanhumbeeck', 0.7539466023445129)] ###Code print(model.wv.most_similar(positive=['parti', 'ouvrier', 'belge'], negative=['france'])) ###Output _____no_output_____ ###Markdown [('sacrifice', 0.7807803153991699), ('citoyen', 0.7515919208526611), ('argument', 0.7479140162467957), ('plaisir', 0.7272078990936279), ('simulacre', 0.7221975922584534), ('bienfait', 0.7211918234825134), ('hommage', 0.7152751684188843), ('remede', 0.7073590755462646), ('permanent', 0.7035471200942993), ('bill', 0.697691023349762)] ###Code #### 5.4.3. Vérifier si les entités retrouvées à l'aide des codes ont bien un rapport avec les Guerres Scolaires ##### On a retrouvé "Doucet" ! 
###Output _____no_output_____ ###Markdown print(model.wv.most_similar(positive=['charles', 'parti', 'catholique', 'ecole', 'scolaire', 'instruction'], negative=['france'])) ###Code [('acte_precite', 0.8557972311973572), ('administ', 0.8055994510650635), ('adoucissement', 0.7933040261268616), ('execution_complete', 0.786177396774292), ('echevin_doucet', 0.785129964351654), ('enseignement_professionnel', 0.7780281901359558), ('ancien_batiment', 0.7780001163482666), ('abbe', 0.762891411781311), ('elat', 0.7548388838768005), ('exploitant', 0.7537887692451477)] ###Output _____no_output_____ ###Markdown print(model.wv.most_similar(positive=['doucet', 'jacobs', 'ecole', 'scolaire'], negative=['france'])) ###Code [('doornick', 0.8798298239707947), ('mastraeten', 0.8599655628204346), ('bisschoffsheim', 0.835128903388977), ('froidmont', 0.8349340558052063), ('michiels', 0.8305588960647583), ('dermeeren', 0.828158438205719), ('humbeeck', 0.825933575630188), ('gaver', 0.8250044584274292), ('capouillet', 0.8232063055038452), ('cutsem', 0.8186049461364746)] ###Output _____no_output_____ ###Markdown Silences notables Où est Charles Woeste ? ###Code print(model.wv.most_similar(positive=['woeste', 'ecole', 'scolaire'], negative=['france'])) model.wv.most_similar("woeste", topn=20) model.wv.similarity("woeste", "ecole") ###Output _____no_output_____ ###Markdown Qu'en est-il de la loi Schollaert ? ###Code model.wv.similarity("schollaert", "ecole") model.wv.most_similar("schollaert", topn=20) print(model.wv.most_similar(positive=['schollaert', 'ecole', 'scolaire'], negative=['france'])) ###Output _____no_output_____ ###Markdown Il n'a en effet pas été possible de retrouver de nombreux acteurs via cette technique, comme Pierre Harmel, Leo Collard, la loi Buset Deschcryver... ###Code # 6. Pousser les recherches plus loin ? ## 6.1. Conjugaison de techniques : ###Output _____no_output_____ ###Markdown print(model.wv.most_similar(positive=['doucet', 'jacobs', 'ecole', 'scolaire'], negative=['france'])) model.wv.most_similar("mastraeten", topn=20) ###Code [('vandermeeren', 0.9610984921455383), ('michiels', 0.960659921169281), ('doucet', 0.9586436152458191), ('gaver', 0.9459365010261536), ('doornick', 0.9345253705978394), ('froidmont', 0.9285418391227722), ('capouillet', 0.9243782162666321), ('verstraeten', 0.9227901101112366), ('trumper', 0.9200323224067688), ('kaieman', 0.9135629534721375), ('jacobs', 0.9111238718032837), -> JACKPOT ! ('bisschoffsheim', 0.9092094898223877), -> JACKPOT ! ('humbeeck', 0.9062764644622803), -> JACKPOT ! ('van_gaver', 0.90577632188797), ('otlet', 0.9036937952041626), ('verhulst', 0.9029452204704285), ('dermeeren', 0.9028167128562927), ('van_humbeeck', 0.8976942300796509), -> JACKPOT ! ('bartels', 0.8967069387435913), ('hemptinne', 0.8890584707260132)] ###Output _____no_output_____ ###Markdown model.wv.most_similar("doornick", topn=20) ###Code [('gaver', 0.9364639520645142), ('mastraeten', 0.9345253705978394), ('humbeeck', 0.9333842992782593), -> JACKPOT ! ('bisschoffsheim', 0.9302659630775452), -> JACKPOT ! ('doucet', 0.9266588687896729), -> JACKPOT ! ('froidmont', 0.9248185157775879), ('michiels', 0.9132710099220276), ('vandermeeren', 0.9031562209129333), ('van_humbeeck', 0.9017702341079712), -> JACKPOT ! 
('van_gaver', 0.9016945362091064), ('trumper', 0.8993610739707947), ('van_doornick', 0.8944776654243469), ('verhulst', 0.8866839408874512), ('dermeeren', 0.8852307796478271), ('cutsem', 0.8835622668266296), ('sachman', 0.8812598586082458), ('verstraeten', 0.8805428743362427), ('capouillet', 0.8796138167381287), ('kaieman', 0.8791176676750183), ('mosselman', 0.8775151968002319)] ###Output _____no_output_____ ###Markdown TP4 - Annexes ###Code Pour une navigation plus aisée, la structure des annexes est basée sur celle du rapport écrit. ###Output _____no_output_____ ###Markdown 3. Nuage de mots – wordcloud ###Code # Stopwords (Idem que dans s1) sw = stopwords.words("french") sw += ["les", "plus", "cette", "fait", "faire", "être", "deux", "comme", "dont", "tout", "ils", "bien", "sans", "peut", "tous", "après", "ainsi", "donc", "cet", "sous", "celle", "entre", "encore", "toutes", "pendant", "moins", "dire", "cela", "non", "faut", "trois", "aussi", "dit", "avoir", "doit", "contre", "depuis", "autres", "van", "het", "autre", "jusqu", "ville", "van het", "texte texte", "membres prennent" , "collège van", "octobre TEXTE", "het collège", "prennent part", "van hun", "frs", "membres", "décembre", "monsieur", "madame", "personnes", "nouveau", "approbation", "fonctions", "mesdames", "elles", "demande", "mars", "moyen", "messieurs", "très", "concerne", "voir", "mai", "juin", "juillet", "août", "septembre", "octobre", "novembre", "janvier", "février", "avril", "jour", "conditions", "extraordinaires", "charge", "raison", "pouvoir", "fourniture", "suite", "ans", "intervention", "proposition", "déjà", "vue", "sujet", "divers", "application", "relative", "leurs", "idem", "proposer", "partie", "diverses", "heures", "populations", "adoptées", "pris", "accord", "principe", "prix", "état", "crédit", "question", "certaines", "aide", "part", "taux", "services", "ordre", "aucune", "articles", "publics", "suit", "mise", "avant", "etat", "agit", "celui", "compte", "administration", "assistance", "société", "entretien", "cours", "total", "premier", "autorité", "territoire", "donner", "caisse", "communes", "echevin", "nominal", "arrêté", "cas", "toujours", "een", "voix", "actes", "lors", "adoption", "pays", "discussions", "monnaie", "effet", "émettre", "montant", "travail", "chaque", "augmentation", "délibération", "temps", "etc", "salaires", "droit", "fonds", "voté", "commission", "favorable", "recettes", "rien", "sollicite", "renouvellement", "public", "faite", "groupe", "alors", "terrain", "mois", "subsides", "population", "fin", "grand", "mandat", "taxes", "voie", "situation", "nouvelle", "règlement", "traitements", "avis", "devant", "lieu", "nom", "également", "fois", "toute", "texte", "personnel", "exercice", "rapport", "paiement", "chapitre", "normale", "nombre", "date", "suivant", "amendement", "décision", "qualité", "année", "rapports", "ceux", "projet", "parce", "que", "point", "somme", "nomination", "certains", "supérieure", "examen", "millions", "frais", "section", "adoptés", "mais", "où", "ou", "et", "donc", "or", "ni", "car", "que", "afin", "pour", "de", "sorte", "façon", "manière", "puisque", "parce", "comme", "vu", "étant", "fait", "autant", "même", "si", "quoi", "quoique", "bien", "tant", "tellement", "assez", "jusqu", "lorsque", "quand", "aussitôt", "sitôt", "dès", "après", "pendant", "dans", "alors", "sans", "aussi", "également", "plus", "cause", "à", "a", "du", "dû", "moins", "ce", "cet", "cette", "cependant", "néanmoins", "moins", "toutefois", "par", "tandis", "abord", "d'", "jadis", "quand", "je", 
"tu", "elle", "il", "nous", "vous", "elles", "ils", "on", "me", "te", "toi", "lui", "soi", "eux", "leur", "leurs", "sien", "siens", "siennes", "notre", "votre", "nos", "vos", "ce", "ça", "ceci", "cela", "celui", "celui-ci", "celui-là", "celle", "celle-ci", "celle-là", "ceux", "ceux-ci", "ceux-là", "aucun", "chacun", "lequel", "laquelle", "lesquelles", "personne", "quelque", "quelqu'", "quiconque", "rien", "tout", "aucune", "certes", "certains", "certaines", "beaucoup", "bon", "peu", "plupart", "plusieurs", "qui", "que", "quoi", "dont", "quiconque", "auquel", "auxquels", "auxquelles", "duquel", "desquels", "desquelles", "duquel", "deux", "trois", "un", "une", "quatre", "cinq", "six", "sept", "huit", "neuf", "dix", "ving", "trente", "quarante", "cinquante", "soixante", "septante", "cent", "nonante", "bien", "comme", "mal", "volontiers", "à nouveau", "à tort", "admirablement", "ainsi", "aussi", "comment", "debout", "également", "ensemble", "exprès", "mal", "mieux", "plutôt", "presque", "vite", "ici", "ailleurs", "alentour", "après", "arrière", "autour", "avant", "dedans", "dehors", "derrière", "dessous", "devant", "là", "loin", "où", "partout", "près", "y", "quelquefois", "parfois", "autrefois", "sitôt", "bientôt", "aussitôt", "tantôt", "alors", "après", "ensuite", "enfin", "d'abord", "tout à coup", "premièrement", "soudain", "aujourd'hui", "demain", "hier", "auparavant", "avant", "cependant", "déjà", "demain", "depuis", "désormais", "enfin", "ensuite", "jadis", "jamais", "maintenant", "puis", "quand", "souvent", "toujours", "tard", "tôt", "longuement", "quasi", "davantage", "plus", "moins", "ainsi", "assez", "aussi", "autant", "beaucoup", "combien", "encore", "environ", "fort", "guère", "presque", "peu", "si", "tant", "tellement", "tout", "très", "trop", "ainsi", "aussi", "pourtant", "néanmoins", "toutefois", "cependant", "en effet", "puis", "ensuite", "ailleurs","conséquent" "assurément", "certainement", "certes", "oui", "non", "peut-être", "précisément", "probablement", "sans", "volontiers", "vraiment", "ne", "guère", "jamais", "pas", "plus", "point", "rien" "francs", "publique", "conseil", "bruxelles", "collège", "communal", "bourgmestre", "service", "rue", "franc", "article", "francs", "budget", "taxe", "dépenses", "carton", "salle", "mises", "aménagement", "modifications", "quelques", "appel", "payer", "pont", "installation", "présents", "conclusions", "notamment", "instruction", "jours", "heure", "répond", "grande", "reste", "prennent", "mettre", "appel", "soir", "propriétés", "boulevard", "concession", "art", "modification", "produit", "bruxellois", "intérêts", "placement", "doivent", "moment", "disponibilité", "valeur", "ouverture", "secrétaire", "objet", "lecture", "peuvent", "conseiller", "chiffre", "politique", "nomme", "fabrique", "général", "pourquoi", "place", "titre", "séance", "installation", "bureau", "démission", "grandes", "appareils", "intérêt", "crois", "compétent", "majoration", "libre", "quant", "autorisation", "extraordinaire", "base", "président", "exploitation", "don", "matériel", "vote", "création", "pourrait", "ancien", "documents", #Stopwords: 1878 -> avant utilisation d'autres méthodes de recherches "courant", "voitures", "sépulture", "discussion", "sainte", "faisant", "émis", "bois", "impasse", "première", "senne", "obtenir", "cimetière", "excédant", "durant", "rues", "chef", "lettre", "parc", "faites", "demander", "pierre", "nature", "obtenu", "nécessaire", "loi", "hospices", "honorable", "quartier", "saint", "remboursable", "adjudication", "années", 
"faubourgs", "gaz", "hui", "ressources", "maison", "police", "celte", "propose", "aujourd", "terrains", "eau", "odrinaires", "fonction", "inclusivement", "gouvernement", "usine", "traitement", "hospice", "savoir", "entreprise", "pourra", "circulation", "pourra", "comités", "donne", "donné", "ordinaires", "allocation", "unanimité", "disposition", "remboursement", "obligations", "parcelle", "suivants", "paris", "porte", "celles", "décès", "ares", "fixé", "buis", "premiers", "hygiène", "mesures", "maisons", "présente", "exposition", "schaerbeek", "supplémentaire", "pauvres", "nommé", "pourront", "corps", "acte", "établir", "mètres", "location", "centiares", "crédits", "tableau", "secours", "concours", "hôpital", "fabriques", "emprunt", "justice", "concessions", "constructions", "premiers", "chez", "remboursables", "distribution", "prochaine", "sections", "eaux", "champ", "action", "partir", "honneur", "côté", "suppression", "hôtel", "nouveaux", "dernier", "capital", "numéros", "voter", "arts", "division", "permanente", "médaille", "électeurs", "contenance", "lot", "comptes", "pièces", "canal", "soumis", "cahier", "but", "vient", "royal", "moyenne", "revenu", "bienfaisance", "prendre", "conséquence", "devront", "mêmes", "capitaux", "système", "seulement", "langue", "jardin", "affaire", "action", "constructions", "voter", "nécessaire", "dernière", "accorder", "manoeuvres", "restauration", "domicile", "notaire", "convention", "orts", "musée", "midi", "sieur", "connaissance", "avances", "province", "travaux", "vente", "finances", "dépenses", "générale", "province", "pense", "expropriation", "entrepôt", "argent", "simple", "faits", "mètre", "totaux", "ordonnance", "approuver", "bénéfice", "matin", "secret", "occasion", "matin", "famille", "palais", "rendre", "agglomération", "acquisition", "hectares", "portée", "annuités", "exécuter", "points", "nécessaires", "pouvons", "intérieur", "nouvelle", "ordonnance", "suivantes", "outre", "observation", "prolongement", "chose", "amortissement", "immédiatement", "droits", "indemnité", "biens", "royale", "lois", "consommation", "industrielle", "infirmerie", "quinze", "compris", "loyer", "membre", "beaux", "matière", "centrale", "voici", "entrée", "égard", "exécution", "abbatoir", "administrateur", "effectuer", "suivante", "vers", "voies", "plan", "spéciale", "local", "nécessité", "hôpitaux", "propriété", "sociétés", "mesure", "liste", "marché", "terre", "chacune", "mis", "locaux", "centimes", "abords", "reconstruction", "spécial", "verbal", "plans", "fêtes", "contrat", "organisation", "explication", "remarquer", "temporaires", "indigents", "donation", "procès", "renvoi", "roi", "echevins", "actuellement", "dépôts", "habitants", "fera", "moyennes", "central", "observations", "dispositions", "conforméments" #Stopwords: 1881 -> après utilisation d'autres méthodes de recherches , "chevaux", "prise", "commune", "vauthier", "besoin", "différent", "ministre", "égouts", "présenter", "devait", "emploi", "longtemps", "devait", "quel", "epoque", "résultat", "dépense", "commune", "hamale", "considérable", "page", "telle", "employés", "présence", "députation", "poisson", "établissements", "recette", "existe", "réserve", "utile", "faveur", "parole", "andré", "bulletin", "font", "explications", "surtout", "erreur", "surtout", "plassche", "construction", "établi", "querelle", "devrait", "yseux", "urgence", "régie", "contraire", "examiner", "chargé", "devra", "bonne", "agents", "inspecteur", "élèves", "inspecteur", "publiques", "der", "importance", "admis", "der", "avenir", 
"emplacement", "direction", "servir", "généraux", "dessus", "égout", "collègue", "délai", "collègue", "actuel", "bauffe", "cannart", "certain", "parfaitement", "sens", "semble", "actuelle", "dame", "seconde", "voulu", "absolumenet", "walravens", "propositions", "installations", "solution", "impôts", "théâtre", "discuter", "usage", "abattoir", "depaire", "besoins", "bétail", "moyens", "devoir", "résulte", "santen", "différentes", "godefroy", "entendu", "dépôt", "halles", "décidé", "sauf", "pouvait", "garantie", "vertu", "richald", "redevance", "conséquent", "dis", "faudrait", "contentieux", "second", "désire", "transport", "exemple", "répondre", "insuffisance", "pavage", "vandergeten", "remise", "devis", "tarif", "largeur", "proposé", "porté", "fer", "delà", "opinion", "déficit", "augmenter", "connaître", "rédaction", "gilles", "époque", "esl", "marchands", "malades", "avenue", "intervenir", "veux", "urinoirs", "cubes", "choses", "difficultés", "parlé", "reçu", "vrai", "peine", "quelle", "affaires", "bureaux", "bénéfices", "dois", "questions", "ninque", "comité", "inconvénents", "compteurs", "bassin", "mommaerts", "élé", "pétition", "gheude", "accepter", "ixelles", "capitale", "feral", "parmi", "communication", "compagnie", "tramways", "consorts", "assurer", "nouvelles", "chemin", "chaussée", "voirie", "pilloy", "colonnes", "résultats", "excédent", "voilà", "programme", "jean", "soumettre", "parties", "échoppes", "rendu", "industrie", "classes", "seule", "présent", "commissaire", "vis", "veut", "porter", "ordinaire", "construire", "conformément", "malgré", "circonstances", "pensions", "surveillance", "conseillers", "nord", "reparations", "longueur", "seul", "pension", "attendu", "legs", "passage", "mot", "laeken", "complètement", "puisse", "incendie", "ajouter", "renseignements", "transports", "ligne", "garde", "tenir", "garde", "impossible", "tenir", "devons", "halle", "spéciaux", "paraît", "vingt", "sise", "avantage", "relatives", "demandé", "concernant", "double", "noire", "termes", "accordée", "annuelle", "moitié", "modifier", "charges", "arriver", "adhésion", "discours", "profit", "achat", "corbillards", "réclamation", "ancienne", "vois", "budgets", "utilité", "sein", "lorsqu", "utilité", "marchés", "monde", "cour", "détail", "architecte", "trouve", "termes", "voilures", "catherine", "viens", "grands", "réponse", "tel", "voyez", "employé", "anvers", "villes", "maladies", "pourraient", "dustin", "contrôle", "séances", "voyez", "réclamations", "ferme", "gouverneur", "gens", "sérieux", "présenté", "constitue", "attendre", "ferme", "ferai", "constaté", "anderlecht", "coût", "permettre", "supercifice", "résolution", "ancienne", "vois", "placer", "reprises", "produits", "proposons", "minque", "passer", "nettoyage", "attention", "places", "forme", "soin", "offre", "rôle", "acquis", "trouver", "commun", "perte", "genre", "anspach", "difficile", "étrangers", "inhumations", "cochers", "propriétaire", "possible", "belge", "chiffres", "parle", "cautionnement", "mobilier", "coût", "permettre", "sais", "enquête", "totale", "main", "occuper", "rappeler", "vouloir", "entrepreneur", "prévisions", "considérables", "convient", "boulevards", "lesquels", "doute", "cession", "ouvriers", "objets", "collègues", "trouver", "dimensions", "quantité", "espèce", "prononcer", "contributions", "inconvenients", "propriétaire", "pechevin", "commerce", "chiffres", "propos", "interpellation", "supprimer", "mots", "admettre", "véritable", "déplacements", "établissement", "maintenir", "transformation", "provisoire", 
"carrés", "poissonniers", "becquet" ] sw = set(sw) ###Output _____no_output_____ ###Markdown 4. Reconnaissance d'entités nommées avec SpaCy Année = 1879 ###Code n=900000 text = open("../data/txt/Bxl_1879_Tome_I1_Part_1.txt" and "../data/txt/Bxl_1879_Tome_I1_Part_2.txt" and "../data/txt/Bxl_1879_Tome_I1_Part_3.txt" and "../data/txt/Bxl_1879_Tome_I1_Part_4.txt" and "../data/txt/Bxl_1879_Tome_I1_Part_5.txt" and "../data/txt/Bxl_1879_Tome_II1_Part_1.txt" and "../data/txt/Bxl_1879_Tome_II1_Part_10.txt" and "../data/txt/Bxl_1879_Tome_II1_Part_11.txt" and "../data/txt/Bxl_1879_Tome_II1_Part_2.txt" and "../data/txt/Bxl_1879_Tome_II1_Part_3.txt" and "../data/txt/Bxl_1879_Tome_II1_Part_4.txt" and "../data/txt/Bxl_1879_Tome_II1_Part_5.txt" and "../data/txt/Bxl_1879_Tome_II1_Part_6.txt" and "../data/txt/Bxl_1879_Tome_II1_Part_7.txt" and "../data/txt/Bxl_1879_Tome_II1_Part_8.txt" and "../data/txt/Bxl_1879_Tome_II1_Part_9.txt", encoding='utf-8').read()[:n] ###Output _____no_output_____ ###Markdown Résultats :Vauthier apparait 10 fois dans le corpusMessieurs apparait 10 fois dans le corpusBourgmestre apparait 9 fois dans le corpusM. André apparait 9 fois dans le corpusEchevin Vauthier apparait 8 fois dans le corpusM. l'Echevin Delecosse apparait 7 fois dans le corpusEchevin Delecosse apparait 7 fois dans le corpusM. le Bourgmestre apparait 6 fois dans le corpusEchevin apparait 5 fois dans le corpusDoucet apparait 5 fois dans le corpusBockstael apparait 5 fois dans le corpusM. Guillery apparait 5 fois dans le corpusLoyer apparait 4 fois dans le corpusLe Secrétaire apparait 4 fois dans le corpusGodefroy apparait 4 fois dans le corpusAllard apparait 4 fois dans le corpusRichald apparait 4 fois dans le corpusWeber apparait 4 fois dans le corpusM. Walravens apparait 4 fois dans le corpusMosnier apparait 3 fois dans le corpusSubside apparait 3 fois dans le corpusJardin Zoologique apparait 3 fois dans le corpusF. V A N D E R S T R A E T E N apparait 3 fois dans le corpusEchevins apparait 3 fois dans le corpusBecquet apparait 3 fois dans le corpusBeyaert apparait 3 fois dans le corpusDustin apparait 3 fois dans le corpusM. Godefroy apparait 3 fois dans le corpusPropriétés apparait 2 fois dans le corpusM. Mosnier apparait 2 fois dans le corpusSolde apparait 2 fois dans le corpuslotti apparait 2 fois dans le corpusHorloges apparait 2 fois dans le corpusSERVICES SPÉCIAUX apparait 2 fois dans le corpusConstructions apparait 2 fois dans le corpusRecettes apparait 2 fois dans le corpusSecrétaire apparait 2 fois dans le corpusAdoption apparait 2 fois dans le corpusAvis favorable apparait 2 fois dans le corpusTrappeniers apparait 2 fois dans le corpusDemeure apparait 2 fois dans le corpusPilloy apparait 2 fois dans le corpusM. Doucet apparait 2 fois dans le corpusPersister apparait 2 fois dans le corpusœ u r apparait 2 fois dans le corpusM. Delecosse apparait 2 fois dans le corpusM. Durant apparait 2 fois dans le corpusMM. Pigeolet apparait 2 fois dans le corpusMM. 
Weber apparait 2 fois dans le corpusContenance apparait 1 fois dans le corpus ###Code #### Année = 1881 ###Output _____no_output_____ ###Markdown Charger le texten=900000text = open("../data/txt/Bxl_1881_Tome_I1_Part_1.txt" and"../data/txt/Bxl_1881_Tome_I1_Part_2.txt" and"../data/txt/Bxl_1881_Tome_I1_Part_3.txt" and"../data/txt/Bxl_1881_Tome_I1_Part_4.txt" and"../data/txt/Bxl_1881_Tome_I1_Part_5.txt" and"../data/txt/Bxl_1881_Tome_I1_Part_6.txt" and"../data/txt/Bxl_1881_Tome_I1_Part_7.txt" and"../data/txt/Bxl_1881_Tome_I1_Part_8.txt" and"../data/txt/Bxl_1881_Tome_I1_Part_9.txt" and"../data/txt/Bxl_1881_Tome_I1_Part_10.txt" and"../data/txt/Bxl_1881_Tome_I2_Part_1.txt" and"../data/txt/Bxl_1881_Tome_I2_Part_2.txt" and"../data/txt/Bxl_1881_Tome_I2_Part_3.txt" and"../data/txt/Bxl_1881_Tome_I2_Part_4.txt" and"../data/txt/Bxl_1881_Tome_I2_Part_5.txt" and"../data/txt/Bxl_1881_Tome_I2_Part_6.txt" and "../data/txt/Bxl_1881_Tome_I2_Part_7.txt" and"../data/txt/Bxl_1881_Tome_I2_Part_8.txt" and"../data/txt/Bxl_1881_Tome_I2_Part_9.txt" and"../data/txt/Bxl_1881_Tome_I2_Part_10.txt" and"../data/txt/Bxl_1881_Tome_I2_Part_11.txt" and"../data/txt/Bxl_1881_Tome_I2_Part_12.txt", encoding='utf-8').read()[:n] ###Code Avis apparait 23 fois dans le corpus M. Allard apparait 17 fois dans le corpus Observations apparait 13 fois dans le corpus Proposition apparait 11 fois dans le corpus Richald apparait 10 fois dans le corpus Réclamation apparait 9 fois dans le corpus Adoption apparait 9 fois dans le corpus Dépôt apparait 7 fois dans le corpus M. Richald apparait 7 fois dans le corpus Concession apparait 7 fois dans le corpus Legs apparait 6 fois dans le corpus Offre apparait 6 fois dans le corpus M. Durant apparait 6 fois dans le corpus Terrain apparait 6 fois dans le corpus Interpellation apparait 5 fois dans le corpus M. André apparait 5 fois dans le corpus Allard apparait 5 fois dans le corpus M. Vauthier apparait 5 fois dans le corpus Pilloy apparait 4 fois dans le corpus M. Walravens apparait 4 fois dans le corpus Rejet apparait 4 fois dans le corpus M. Bauffe apparait 3 fois dans le corpus Autorisation apparait 3 fois dans le corpus Observations de M apparait 3 fois dans le corpus R È G apparait 3 fois dans le corpus M. Doucet apparait 3 fois dans le corpus M. Veldekens apparait 3 fois dans le corpus MM. Mennessier apparait 3 fois dans le corpus Donation apparait 3 fois dans le corpus Emplacement apparait 3 fois dans le corpus M. Depaire apparait 3 fois dans le corpus M. l'Echevin Delecosse apparait 3 fois dans le corpus Lettre apparait 3 fois dans le corpus M. Gisler apparait 3 fois dans le corpus Echevin apparait 3 fois dans le corpus M. Van der Plassche apparait 3 fois dans le corpus M. Moreau apparait 2 fois dans le corpus Vauthier apparait 2 fois dans le corpus M. Bauwens apparait 2 fois dans le corpus M. V A N D E apparait 2 fois dans le corpus M. Yseux apparait 2 fois dans le corpus C R É D I apparait 2 fois dans le corpus Bourgmestre apparait 2 fois dans le corpus Décision apparait 2 fois dans le corpus MM. Burke apparait 2 fois dans le corpus Rouppe apparait 2 fois dans le corpus Démission apparait 2 fois dans le corpus M E S U apparait 2 fois dans le corpus Peeters apparait 2 fois dans le corpus Observation de M apparait 2 fois dans le corpus ###Output _____no_output_____
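###Markdown A note on the loading cells above: chaining the file names with `and` inside `open(...)` does not concatenate them — in Python, `"a" and "b"` evaluates to `"b"`, so only the last file of each chain is actually opened and read. Below is a minimal sketch that reads and concatenates every listed part for 1879 and recounts person entities with spaCy; the model name `fr_core_news_sm` and the top-50 cutoff are assumptions, not taken from the original cells. ###Code
import collections
import spacy

# All parts listed above for 1879: Tome I1 parts 1-5 and Tome II1 parts 1-11.
paths_1879 = [f"../data/txt/Bxl_1879_Tome_I1_Part_{i}.txt" for i in range(1, 6)]
paths_1879 += [f"../data/txt/Bxl_1879_Tome_II1_Part_{i}.txt" for i in range(1, 12)]

n = 900000
text = " ".join(open(p, encoding="utf-8").read() for p in paths_1879)[:n]

# French pipeline; any installed French spaCy model can be substituted here.
nlp = spacy.load("fr_core_news_sm")
doc = nlp(text)

# Count person entities, mirroring the "X appears N times in the corpus" listings above.
people = collections.Counter(ent.text for ent in doc.ents if ent.label_ == "PER")
for name, freq in people.most_common(50):
    print(f"{name} appears {freq} times in the corpus")
###Output _____no_output_____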
notebooks/Comparing-TF-and-PT-models.ipynb
###Markdown Comparing TensorFlow (original) and PyTorch models. You can use this small notebook to check the conversion of the model's weights from the TensorFlow model to the PyTorch model. In the following, we compare the weights of the last layer on a simple example (in `input.txt`), but both models return all the hidden layers so you can check every stage of the model. To run this notebook, follow these instructions: - make sure that your Python environment has both TensorFlow and PyTorch installed, - download the original TensorFlow implementation, - download a pre-trained TensorFlow model as indicated in the TensorFlow implementation readme, - run the script `convert_tf_checkpoint_to_pytorch.py` as indicated in the `README` to convert the pre-trained TensorFlow model to PyTorch. If needed, change the relative paths indicated in this notebook (at the beginning of Sections 1 and 2) to point to the relevant models and code. ###Code import os os.chdir('../') ###Output _____no_output_____ ###Markdown 1/ TensorFlow code ###Code original_tf_inplem_dir = "./tensorflow_code/" model_dir = "../google_models/uncased_L-12_H-768_A-12/" vocab_file = model_dir + "vocab.txt" bert_config_file = model_dir + "bert_config.json" init_checkpoint = model_dir + "bert_model.ckpt" input_file = "./samples/input.txt" max_seq_length = 128 import importlib.util import sys spec = importlib.util.spec_from_file_location('*', original_tf_inplem_dir + '/extract_features_tensorflow.py') module = importlib.util.module_from_spec(spec) spec.loader.exec_module(module) sys.modules['extract_features_tensorflow'] = module from extract_features_tensorflow import * layer_indexes = list(range(12)) bert_config = modeling.BertConfig.from_json_file(bert_config_file) tokenizer = tokenization.FullTokenizer( vocab_file=vocab_file, do_lower_case=True) examples = read_examples(input_file) features = convert_examples_to_features( examples=examples, seq_length=max_seq_length, tokenizer=tokenizer) unique_id_to_feature = {} for feature in features: unique_id_to_feature[feature.unique_id] = feature is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2 run_config = tf.contrib.tpu.RunConfig( master=None, tpu_config=tf.contrib.tpu.TPUConfig( num_shards=1, per_host_input_for_training=is_per_host)) model_fn = model_fn_builder( bert_config=bert_config, init_checkpoint=init_checkpoint, layer_indexes=layer_indexes, use_tpu=False, use_one_hot_embeddings=False) # If TPU is not available, this will fall back to normal Estimator on CPU # or GPU. 
estimator = tf.contrib.tpu.TPUEstimator( use_tpu=False, model_fn=model_fn, config=run_config, predict_batch_size=1) input_fn = input_fn_builder( features=features, seq_length=max_seq_length) tensorflow_all_out = [] for result in estimator.predict(input_fn, yield_single_examples=True): unique_id = int(result["unique_id"]) feature = unique_id_to_feature[unique_id] output_json = collections.OrderedDict() output_json["linex_index"] = unique_id tensorflow_all_out_features = [] # for (i, token) in enumerate(feature.tokens): all_layers = [] for (j, layer_index) in enumerate(layer_indexes): print("extracting layer {}".format(j)) layer_output = result["layer_output_%d" % j] layers = collections.OrderedDict() layers["index"] = layer_index layers["values"] = layer_output all_layers.append(layers) tensorflow_out_features = collections.OrderedDict() tensorflow_out_features["layers"] = all_layers tensorflow_all_out_features.append(tensorflow_out_features) output_json["features"] = tensorflow_all_out_features tensorflow_all_out.append(output_json) print(len(tensorflow_all_out)) print(len(tensorflow_all_out[0])) print(tensorflow_all_out[0].keys()) print("number of tokens", len(tensorflow_all_out[0]['features'])) print("number of layers", len(tensorflow_all_out[0]['features'][0]['layers'])) tensorflow_all_out[0]['features'][0]['layers'][0]['values'].shape tensorflow_outputs = list(tensorflow_all_out[0]['features'][0]['layers'][t]['values'] for t in layer_indexes) ###Output _____no_output_____ ###Markdown 2/ PyTorch code ###Code os.chdir('./examples') import extract_features import pytorch_transformers as ppb from extract_features import * init_checkpoint_pt = "../../google_models/uncased_L-12_H-768_A-12/" device = torch.device("cpu") model = ppb.BertModel.from_pretrained(init_checkpoint_pt) model.to(device) all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long) all_input_mask = torch.tensor([f.input_mask for f in features], dtype=torch.long) all_input_type_ids = torch.tensor([f.input_type_ids for f in features], dtype=torch.long) all_example_index = torch.arange(all_input_ids.size(0), dtype=torch.long) eval_data = TensorDataset(all_input_ids, all_input_mask, all_input_type_ids, all_example_index) eval_sampler = SequentialSampler(eval_data) eval_dataloader = DataLoader(eval_data, sampler=eval_sampler, batch_size=1) model.eval() layer_indexes = list(range(12)) pytorch_all_out = [] for input_ids, input_mask, input_type_ids, example_indices in eval_dataloader: print(input_ids) print(input_mask) print(example_indices) input_ids = input_ids.to(device) input_mask = input_mask.to(device) all_encoder_layers, _ = model(input_ids, token_type_ids=input_type_ids, attention_mask=input_mask) for b, example_index in enumerate(example_indices): feature = features[example_index.item()] unique_id = int(feature.unique_id) # feature = unique_id_to_feature[unique_id] output_json = collections.OrderedDict() output_json["linex_index"] = unique_id all_out_features = [] # for (i, token) in enumerate(feature.tokens): all_layers = [] for (j, layer_index) in enumerate(layer_indexes): print("layer", j, layer_index) layer_output = all_encoder_layers[int(layer_index)].detach().cpu().numpy() layer_output = layer_output[b] layers = collections.OrderedDict() layers["index"] = layer_index layer_output = layer_output layers["values"] = layer_output if not isinstance(layer_output, (int, float)) else [layer_output] all_layers.append(layers) out_features = collections.OrderedDict() out_features["layers"] = all_layers 
all_out_features.append(out_features) output_json["features"] = all_out_features pytorch_all_out.append(output_json) print(len(pytorch_all_out)) print(len(pytorch_all_out[0])) print(pytorch_all_out[0].keys()) print("number of tokens", len(pytorch_all_out)) print("number of layers", len(pytorch_all_out[0]['features'][0]['layers'])) print("hidden_size", len(pytorch_all_out[0]['features'][0]['layers'][0]['values'])) pytorch_all_out[0]['features'][0]['layers'][0]['values'].shape pytorch_outputs = list(pytorch_all_out[0]['features'][0]['layers'][t]['values'] for t in layer_indexes) print(pytorch_outputs[0].shape) print(pytorch_outputs[1].shape) print(tensorflow_outputs[0].shape) print(tensorflow_outputs[1].shape) ###Output (128, 768) (128, 768) ###Markdown 3/ Comparing the standard deviation on the last layer of both models ###Code import numpy as np print('shape tensorflow layer, shape pytorch layer, standard deviation') print('\n'.join(list(str((np.array(tensorflow_outputs[i]).shape, np.array(pytorch_outputs[i]).shape, np.sqrt(np.mean((np.array(tensorflow_outputs[i]) - np.array(pytorch_outputs[i]))**2.0)))) for i in range(12)))) ###Output shape tensorflow layer, shape pytorch layer, standard deviation ((128, 768), (128, 768), 1.5258875e-07) ((128, 768), (128, 768), 2.342731e-07) ((128, 768), (128, 768), 2.801949e-07) ((128, 768), (128, 768), 3.5904986e-07) ((128, 768), (128, 768), 4.2842768e-07) ((128, 768), (128, 768), 5.127951e-07) ((128, 768), (128, 768), 6.14668e-07) ((128, 768), (128, 768), 7.063922e-07) ((128, 768), (128, 768), 7.906173e-07) ((128, 768), (128, 768), 8.475192e-07) ((128, 768), (128, 768), 8.975489e-07) ((128, 768), (128, 768), 4.1671223e-07)
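###Markdown Beyond the per-layer RMS differences printed above, a single pass/fail check can be handy when re-running the conversion. A small sketch reusing the `tensorflow_outputs` and `pytorch_outputs` lists built earlier; the `1e-5` tolerance is an arbitrary choice, not part of the original notebook. ###Code
import numpy as np

# Largest element-wise deviation across all twelve layers.
max_abs_diff = max(
    np.max(np.abs(np.array(t) - np.array(p)))
    for t, p in zip(tensorflow_outputs, pytorch_outputs)
)
print("max abs difference over all layers:", max_abs_diff)

# Boolean check against a (loose) absolute tolerance.
print("all layers close (atol=1e-5):",
      all(np.allclose(t, p, atol=1e-5) for t, p in zip(tensorflow_outputs, pytorch_outputs)))
###Output _____no_output_____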
Attention- Deep learning demo.ipynb
###Markdown The code is taken from https://www.analyticsvidhya.com/blog/2019/11/comprehensive-guide-attention-mechanism-deep-learning/ ###Code import numpy as np import pandas as pd df = pd.read_csv('yelp_labelled.txt', sep='\t') df.columns = ['review','senti'] df.head(3) from tensorflow.keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences from sklearn.model_selection import train_test_split corpus = df['review'] t=Tokenizer() t.fit_on_texts(corpus) text_matrix=t.texts_to_sequences(corpus) # text_matrix --> # [[585, 7, 12, 16], # [12, 151, 2, 1, 428, 4, 46, 429], len_mat=[] for i in range(len(text_matrix)): len_mat.append(len(text_matrix[i])) # len_mat --> # [4, # 8, # 15, text_pad = pad_sequences(text_matrix, maxlen=32, padding='post') ###Output _____no_output_____ ###Markdown Without Attention -- plane LSTM ###Code features = 32 vocab_length = np.unique(text_pad).shape[0] inputs1=Input(shape=(features,)) x1=Embedding(input_dim=vocab_length+1,output_dim=32,\ input_length=features,embeddings_regularizer=keras.regularizers.l2(.001))(inputs1) x1=LSTM(100,dropout=0.3,recurrent_dropout=0.2)(x1) outputs1=Dense(1,activation='sigmoid')(x1) model1=Model(inputs1,outputs1) train_x, test_x, train_y, test_y = train_test_split(text_pad, df['senti'], test_size=0.33, random_state=42) model1.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model1.summary() model1.fit(x=train_x,y=train_y,batch_size=100,epochs=10,verbose=1,shuffle=True,validation_split=0.2) ###Output Epoch 1/10 ###Markdown Now using Attention mechanism custom layers which inherit Layer class should have build(),call (), compute_output_shape() and get_config(). ###Code from IPython.display import Image Image(filename='atten_senti.png') from tensorflow.keras.layers import Layer class attention(Layer): def __init__(self,**kwargs): super(attention,self).__init__(**kwargs) def build(self,input_shape): self.W=self.add_weight(name="att_weight",shape=(input_shape[-1],1),initializer="normal") self.b=self.add_weight(name="att_bias",shape=(input_shape[1],1),initializer="zeros") super(attention, self).build(input_shape) def call(self,x): et=tf.squeeze(tf.tanh(tf.matmul(x,self.W)+self.b),axis=-1) at=tf.nn.softmax(et) at=tf.expand_dims(at,axis=-1) output=x*at return tf.math.reduce_sum(output,axis=1) def compute_output_shape(self,input_shape): return (input_shape[0],input_shape[-1]) def get_config(self): return super(attention,self).get_config() inputs=Input((features,)) x=Embedding(input_dim=vocab_length+1,output_dim=32,input_length=features,\ embeddings_regularizer=keras.regularizers.l2(.001))(inputs) att_in=LSTM(100,return_sequences=True,dropout=0.3,recurrent_dropout=0.2)(x) att_out=attention()(att_in) outputs=Dense(1,activation='sigmoid',trainable=True)(att_out) model=Model(inputs,outputs) model.summary() model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc']) model.fit(x=train_x,y=train_y,batch_size=100,epochs=10,verbose=1,shuffle=True,validation_split=0.2) ###Output Epoch 1/10 6/6 [==============================] - 0s 74ms/step - loss: 0.7432 - acc: 0.4879 - val_loss: 0.7326 - val_acc: 0.4701 Epoch 2/10 6/6 [==============================] - 0s 39ms/step - loss: 0.7268 - acc: 0.4804 - val_loss: 0.7166 - val_acc: 0.5299 Epoch 3/10 6/6 [==============================] - 0s 39ms/step - loss: 0.7143 - acc: 0.5084 - val_loss: 0.7059 - val_acc: 0.5299 Epoch 4/10 6/6 [==============================] - 0s 37ms/step - loss: 0.7052 - acc: 0.5178 - val_loss: 0.6998 - 
val_acc: 0.5746 Epoch 5/10 6/6 [==============================] - 0s 37ms/step - loss: 0.6982 - acc: 0.5477 - val_loss: 0.6930 - val_acc: 0.5821 Epoch 6/10 6/6 [==============================] - 0s 39ms/step - loss: 0.6874 - acc: 0.6093 - val_loss: 0.6688 - val_acc: 0.6343 Epoch 7/10 6/6 [==============================] - 0s 39ms/step - loss: 0.5973 - acc: 0.7308 - val_loss: 0.6787 - val_acc: 0.6119 Epoch 8/10 6/6 [==============================] - 0s 39ms/step - loss: 0.5015 - acc: 0.7813 - val_loss: 0.7803 - val_acc: 0.6045 Epoch 9/10 6/6 [==============================] - 0s 38ms/step - loss: 0.3667 - acc: 0.8579 - val_loss: 0.7116 - val_acc: 0.6045 Epoch 10/10 6/6 [==============================] - 0s 37ms/step - loss: 0.3220 - acc: 0.8841 - val_loss: 0.6420 - val_acc: 0.6493
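###Markdown A small follow-up sketch (not part of the original tutorial): because `attention` is a custom layer, reloading a saved model requires passing the class through `custom_objects`. The file name below is only an example. ###Code
from tensorflow.keras.models import load_model

model.save('sentiment_attention.h5')  # example file name (HDF5 format)
restored = load_model('sentiment_attention.h5', custom_objects={'attention': attention})
restored.summary()
###Output _____no_output_____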
C1W2_L1_Huber_Loss.ipynb
###Markdown Ungraded Lab: Huber LossIn this lab, we'll walk through how to create custom loss functions. In particular, we'll code the [Huber Loss](https://en.wikipedia.org/wiki/Huber_loss) and use that in training the model. Imports ###Code try: # %tensorflow_version only exists in Colab. %tensorflow_version 2.x except Exception: pass import tensorflow as tf import numpy as np from tensorflow import keras ###Output _____no_output_____ ###Markdown Prepare the DataOur dummy dataset is just a pair of arrays `xs` and `ys` defined by the relationship $y = 2x - 1$. `xs` are the inputs while `ys` are the labels. ###Code # inputs xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float) # labels ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float) ###Output _____no_output_____ ###Markdown Training the modelLet's build a simple model and train using a built-in loss function like the `mean_squared_error`. ###Code model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])]) model.compile(optimizer='sgd', loss='mean_squared_error') model.fit(xs, ys, epochs=500,verbose=0) print(model.predict([10.0])) ###Output [[18.980743]] ###Markdown Custom LossNow let's see how we can use a custom loss. We first define a function that accepts the ground truth labels (`y_true`) and model predictions (`y_pred`) as parameters. We then compute and return the loss value in the function definition. ###Code def my_huber_loss(y_true, y_pred): threshold = 1 error = y_true - y_pred is_small_error = tf.abs(error) <= threshold small_error_loss = tf.square(error) / 2 big_error_loss = threshold * (tf.abs(error) - (0.5 * threshold)) return tf.where(is_small_error, small_error_loss, big_error_loss) ###Output _____no_output_____ ###Markdown Using the loss function is as simple as specifying the loss function in the `loss` argument of `model.compile()`. ###Code model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])]) model.compile(optimizer='sgd', loss=my_huber_loss) model.fit(xs, ys, epochs=500,verbose=0) print(model.predict([10.0])) ###Output [[18.794144]]
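###Markdown As a possible extension (a sketch, not part of this lab), the hard-coded `threshold = 1` can be exposed as a tunable parameter by using a wrapper function that returns the actual loss function: ###Code
def my_huber_loss_with_threshold(threshold=1.0):
    # returns a loss function with the given threshold "baked in"
    def huber(y_true, y_pred):
        error = y_true - y_pred
        is_small_error = tf.abs(error) <= threshold
        small_error_loss = tf.square(error) / 2
        big_error_loss = threshold * (tf.abs(error) - (0.5 * threshold))
        return tf.where(is_small_error, small_error_loss, big_error_loss)
    return huber

model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss=my_huber_loss_with_threshold(threshold=1.2))
model.fit(xs, ys, epochs=500, verbose=0)
print(model.predict([10.0]))
###Output _____no_output_____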
sandbox3/A.ipynb
###Markdown [ベイズ最適化入門](https://qiita.com/masasora/items/cc2f10cb79f8c0a6bbaa) https://github.com/Ma-sa-ue/practice/blob/master/machine%20learning(python)/bayeisan_optimization.ipynb The original code is based on python2.A few modifications to fit it to python3 are needed. ###Code import numpy as np import matplotlib.pyplot as plt from scipy.stats import norm import sys np.random.seed(seed=123) %matplotlib inline #### kernel def my_kernel(xn,xm,a1=200.0,a2=0.1,a3=1.0,a4=10.0): return a1*np.exp(-a2*0.5*(xn - xm)**2) ### Gaussian process def pred(_x_sample,_y_sample, _x): K = np.zeros([len(_x_sample),len(_x_sample)]) for i in range(len(_x_sample)): for j in range(len(_x_sample)): K[i,j] = my_kernel(_x_sample[i], _x_sample[j]) mu = 0.0*_x var = 0.0*_x aux = 0.0*_x_sample for i in range(len(_x)): ### gram matrix for j in range(len(_x_sample)): aux[j] = my_kernel(_x_sample[j], _x[i]) mu[i] = np.dot(aux,np.dot(np.linalg.inv(K),_y_sample)) var[i] = my_kernel(_x[i],_x[i]) - np.dot(aux, np.dot(np.linalg.inv(K + np.identity(len(_x_sample))), aux)) return mu, var def xx2K(xn,xm,a1=200.0,a2=100.0,a3=1.0,a4=10.0): if (len(xn) == 1): K = np.zeros([1,len(xm)]) else : K = 0.0*np.outer(xn,xm) for i in range(len(xn)): K[i,:] = a1*np.exp(-(xn[i] - xm[:])**2/a2) return K def _xsample2meanvariance(_xsample, _ysample, _x): eps = 1.0e-10 K = xx2K(_xsample,_xsample) + eps*np.eye(len(_xsample)) L = np.linalg.cholesky(K) #plt.matshow(K) #plt.matshow(L) kast = xx2K(_xsample,_x) kastast = xx2K(_x,_x) w = np.linalg.solve(L, _ysample) z = np.linalg.solve(L.T, w) fmean = np.dot(kast.T, z) W = np.linalg.solve(L, kast) Z = np.linalg.solve(L.T, W) fvariance = kastast - np.dot(kast.T, Z) fvariance = np.diag(fvariance) std = np.sqrt(fvariance) return fmean, fvariance def generate_sample(x): return 40.0*np.sin(x/1.0) - (0.3*(x+6.0))**2 - (0.2*(x-4.0))**2 - 1.0*np.abs(x+2.0) + np.random.normal(0,1,1) x = np.linspace(-20,20,1000) z = list(map(generate_sample,x)) #for python3 #z = generate_sample(x) plt.plot(x, z) #### plot true data plt.show() #sys.exit() #### PI def aqui2(_mean, _var, _maxval): _lamb = (_mean - _maxval- 0.01)/(var*1.0) _z = norm.cdf(_lamb) return _z #### EI def aqui1(_mean, _var, _maxval): _lamb = (_mean - _maxval)/(_var*1.0) _z = (_mean - maxval)*norm.cdf(_lamb) + _var*norm.pdf(_lamb) return _z #### UCB def aqui3(_mean, _var, _maxval): return _mean+1.0*_var x_sample = np.array([]) y_sample = np.array([]) x_point = np.random.uniform(-20,20) epoch=15 maxval = 'Initial' plt.figure(figsize=(20, 50)) for i in range(epoch): if x_point not in x_sample: x_sample = np.append(x_sample,x_point) print ("x_point, maxval = "+str(x_point)+', '+str(maxval)) y_point = generate_sample(x_point) y_sample = np.append(y_sample,y_point) #mean, var = pred(x_sample, y_sample, x) mean, var = _xsample2meanvariance(x_sample, y_sample, x) maxval = max(y_sample) accui = aqui3(mean, var, maxval) ###change this function #accui = aqui2(mean, var, maxval) ###change this function #accui = aqui1(mean, var, maxval) ###change this function # x_point = x[maximum(accui)]+np.random.normal(0,0.01,1) x_point = x[np.argmax(accui)]+np.random.normal(0,0.01,1) if(i%1==0): plt.subplot(epoch*2,2,i*2+1) plt.plot(x,np.array(mean),color="red",label="mean") plt.plot(x,z,color="yellow") high_bound = mean+ 1.0*var lower_bound = mean- 1.0*var plt.fill_between(x,high_bound,lower_bound,color="green",label="confidence") plt.xlim(-20,20) plt.ylim(-100,100) plt.scatter(x_sample,y_sample) plt.subplot(epoch*2,2,i*2+2) plt.plot(x,accui) plt.savefig("bayes_UCB.png")### 
change the name #plt.legend() plt.show() #print "finish" print ("finish") ###Output x_point, maxval = 9.68836939984255, Initial x_point, maxval = [-20.00614447], -46.090564309299886 x_point, maxval = [-4.5379972], -46.090564309299886 x_point, maxval = [20.00232789], 33.648771810759406 x_point, maxval = [1.34292799], 33.648771810759406 x_point, maxval = [-1.94623074], 33.648771810759406 x_point, maxval = [-11.51707391], 33.648771810759406 x_point, maxval = [5.27453701], 33.648771810759406 x_point, maxval = [15.38886482], 33.648771810759406 x_point, maxval = [-8.46147698], 33.648771810759406 x_point, maxval = [-16.67628755], 33.648771810759406 x_point, maxval = [12.83469869], 33.648771810759406 x_point, maxval = [-14.35591709], 33.648771810759406 x_point, maxval = [18.48196994], 33.648771810759406 x_point, maxval = [-18.7178149], 33.648771810759406
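###Markdown A small extra sketch (using the `y_sample` array produced by the loop above): plotting the best sampled value found so far after each iteration is a common way to visualise how quickly the optimisation converges. ###Code
best_so_far = np.maximum.accumulate(y_sample)
plt.figure(figsize=(8, 4))
plt.plot(range(1, len(best_so_far) + 1), best_so_far, marker='o')
plt.xlabel('iteration')
plt.ylabel('best sampled value so far')
plt.show()
###Output _____no_output_____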
notebooks_varios/Funciones para ejercicios de guias 1-3.ipynb
###Markdown Guía 1 ###Code def coordenadas_de_un_vector_respecto_a_una_base_B(v: Matrix, coordenadas: Dict[Symbol, Matrix]): ''' Devuelve las coordenadas de un vector respecto de una base B = {v1, v2, ..., vn} v: Es un vector escrito con símbolos (v1, v2, etc) coordenadas: es un diccionario que indica cómo traducir cada vector en coordenadas de la base B Ejemplo de un ejercicio completo: v1, v2, v3 = symbols('v1 v2 v3') B = [v1, v2, v3] coordenadas = { v1: Matrix([1, 0, 0]), v2: Matrix([0, 1, 0]), v3: Matrix([0, 0, 1]), } w1 = 0*v1 + 0*v2 + 0*v3 w2 = v1 - v2 w3 = 2*v2 - v3 w2_B = coordenadas_de_un_vector_respecto_a_una_base_B(w2, coordenadas) w3_B = coordenadas_de_un_vector_respecto_a_una_base_B(w3, coordenadas) G_B = Matrix([[60, 30, 20], [30, 20, 15], [20, 15, 12]]) G = matriz_de_gram(producto_interno_por_definicion, G_B, w2_B, w3_B) area_de_un_triangulo(G) ''' w = v for vector in coordenadas: w = w.subs(vector, coordenadas[vector]) return w def encontrar_a(B: List[Matrix], a: Symbol): ''' MATRICES CUADRADAS Halla el o los valores que puede tener a para que la base dada sea una base efectivamente (para que los vectores sean linealmente independientes) Ejemplo: a = Symbol('a') B = [Matrix([a, 1, 2]), Matrix([3, 2, 3]), Matrix([1, -a, 1])] encontrar_a(B, a) ''' return solve(det(Matrix.hstack(*B)), a) def algoritmo_espacio_columna(S: List[Matrix]): ''' Encuentra el espacio columna de un subespacio Devuelve una tupla con una lista de los vectores que generan el espacio columna y los índices de los pivotes. S: es una lista de los vectores que generan al subespacio S ''' B_S = Matrix.hstack(*S) _, pivotes_S = B_S.rref() S_li = [] for pivote in pivotes_S: S_li.append(S[pivote]) return S_li, pivotes_S def dos_subespacios_generan_el_mismo_subespacio(S1: List[Matrix], S2: List[Matrix]): ''' Verifica si dos subespacios son el mismo subespacio devolviendo True para ese caso y False en caso contrario. Ejemplo: S1 = [Matrix([-4, -5, -3]), Matrix([26, 41, 23])] bien = [Matrix([11, 18, 10]), Matrix([15, 23, 13])] mal_1 = [Matrix([-3, 10, 8]), Matrix([-6, 11, 10])] mal_2 = [Matrix([7, 12, 2]), Matrix([9, 14, 1])] mal_3 = [Matrix([7, 2, 12]), Matrix([9, 1, 14]), 2 * Matrix([9, 1, 14])] dos_subespacios_generan_el_mismo_subespacio(S1, bien), dos_subespacios_generan_el_mismo_subespacio(S1, mal_1), dos_subespacios_generan_el_mismo_subespacio(S1, mal_2), dos_subespacios_generan_el_mismo_subespacio(S1, mal_3) ''' S1_li, pivotes_S1 = algoritmo_espacio_columna(S1) S2_li, pivotes_S2 = algoritmo_espacio_columna(S2) B_S1_li = Matrix.hstack(*S1_li) B_S2_li = Matrix.hstack(*S2_li) B = Matrix.hstack(B_S1_li, B_S2_li) _, pivotes_B = B.rref() return len(pivotes_B) == len(pivotes_S1) and len(pivotes_B) == len(pivotes_S2) ''' OBTENER LOS 4 FANTÁSTICOS A.nullspace() -> devuelve el espacio nulo de A A.columnspace() -> devuelve el espacio columna de A A.T.columnspace() -> devuelve el espacio columna de A traspuesta A.T.nullspace() -> devuelve el espacio nulo de A de a traspuesta ''' #Función que calcula el Wronskiano #(existe wronskian() directamente de sympy, pero no devuelve la matriz wronskiana, sino sólo el valor del wronskiano) def Wronskiano(B, x, return_matrix = True): """ Calcula el Wronskiano del conjunto de funciones contenidas en B en función de la variable x. Args: B (lista de funciones): El conjunto de funciones sobre el cual calcular el Wronskiano en función de la variable x. x (objeto tipo Symbol): Variable independiente. 
return_matrix (bool, opcional): Si es cierto, devuelve la matriz Wronskiana además del Wronskiano. Returns: w (función): Wronskiano del conjunto B. W (objeto tipo Matrix, opcional): Si return_matrix es True, devuelve la matriz wronskiana. Ejemplo: #Para ver si un conjunto es linealmente independiente en su correspondiente espacio vectorial, podemos emplear el Wronskiano. x = Symbol('x') # Definición de la variable simbólica F = [1, sinh(x), cosh(x)] #Conjunto de funciones W, M_W = Wronskiano(F,x,return_matrix=True) #W es el Wronskiano, M_W es la matriz wronskiana print('La matriz wronskiana es: '); display(M_W) print('y el Wronskiano resulta: '); display(W) #W.subs(x,0) #Si quisiera evaluar el Wronskiano en algún valor de x, por ejemplo en este caso x=0 """ n = len(B) # Cantidad de vectores # Calculo la matrix wronskiana W = zeros(n,n) for i in range(n): for j, f in enumerate(B): W[i,j] = diff(f,x,i) if return_matrix: return W.det(), W return W.det() ###Output _____no_output_____ ###Markdown Guía 2 ###Code def matriz_de_transformacion_lineal(B: List[Matrix], img_B: List[Matrix]): ''' Halla la matriz de una transformación lineal T Si T(x) = A * x, esta función encuentra A dada una base B y la imágen por T de esa base B, es decir T(B). B: es una lista de vectores que conforman la base B img_B: es una lista de vectores que corresponden a las imágenes de los vectores de la base B Ejemplo: B = Matrix([[2, 2, 1], [1, -2, 2], [-2, 1, 2]]) img_B = Matrix([[0, -1, -1], [1, 0, -1], [1, 0, -1], [1, 1, 0]]) y = Matrix([2, 5, 5, 3]) ''' return Matrix.hstack(*img_B) * Matrix.hstack(*B).inv() def imagen_por_T_de_un_subespacio(B: List[Matrix], img_B: List[Matrix], S: List[Matrix]): ''' Halla la imágen por T de un subespacio S dada una base B y la imágen por T de B B: es una lista de vectores que conforman la base B img_B: es una lista de vectores que corresponden a las imágenes de los vectores de la base B S: es una lista de vectores que generan el subespacio S ''' img_S = matriz_de_transformacion_lineal(B, img_B) * Matrix.hstack(*S) return img_S def preimagen_subespacio(B,img_B,Au): """ Dada una transformación lineal definida por una base B y las imágenes img_B de cada uno de los vectores de B, la función devuelve una base del subespacio T^(-1)(U), donde U = Nul(Au). 
""" B_matrix = Matrix.hstack(*B) Eb, pivots = B_matrix.rref(pivots=True) img_B_matrix = Matrix.hstack(*img_B) sol = (Au * img_B_matrix).nullspace() B_T_inv_U = [B_matrix * v for v in sol] return B_T_inv_U # TODO: EN CONSTRUCCIÓN def nucleo_de_operador_diferencial(L): ''' ''' x = Symbol('x') Nu_L = set() raices = str(L).split('*') for raiz in raices: try: l = -int(raiz[raiz.index('-')+1]) except: l = int(raiz[raiz.index('+')+1]) if '**' in raiz: k = int(raiz.split('**')[1]) else: k = 1 for i in range(k): Nu_L.add((x**(i))*exp(-l*x)) return Nu_L def solucion_particular(Nu_L, Nu_A): Nu_AL = Nu_L | Nu_A yp_expr = '' for i, e in enumerate(Nu_AL - Nu_L): yp_expr += f'a{i+1}*{e}+' return yp_expr[:-1] D, I = symbols('D I') L = (D-2*I)*(D-4*I)*(D+3*I)**2 Nu_L = nucleo_de_operador_diferencial(L) Nu_A = nucleo_de_operador_diferencial('(D+3I)^6') solucion_particular(Nu_L, Nu_A) ###Output _____no_output_____ ###Markdown Guía 3 Productos Internos ###Code ''' Para los productos internos en general A: es una matriz B: es una matriz armada con los vectores que conforman una base B x, y: son vectores G: es la matriz de Gram ''' def producto_interno_matrices_3_x_3(A: Matrix, B: Matrix, G: Matrix=None): return 1/2 * (B.T * A).trace() def producto_interno_por_definicion(x: Matrix, y: Matrix, G: Matrix): return (y.T * G * x)[0] def producto_interno_polinomios_integral(p, q, x, limite_inferior=0, limite_superior=1, a=1): return integrate(p * q * a, (x, limite_inferior, limite_superior)) def producto_interno_canonico(x, y, G): return (y.T * x)[0] ###Output _____no_output_____ ###Markdown Matriz de Gram ###Code def matriz_de_gram(producto_interno, G_B, *vectores_ordenados): ''' Dado un producto interno, una matriz de Gram que define ese producto interno (si es de matrices enviar cualquier cosa) y una lista de vectores ordenados devuelve la matriz de Gram de esos vectores producto_interno: es el nombre de una función definida G_B: es la matriz de Gram del producto interno *vectores_ordenados: es una lista de vectores que se tienen que mandar ordenados Ejemplo: ''' G = eye(len(vectores_ordenados)) for i in range(len(vectores_ordenados)): for j, vector in enumerate(vectores_ordenados): G[i,j] = producto_interno(vectores_ordenados[i], vectores_ordenados[j], G_B) return G ###Output _____no_output_____ ###Markdown Norma, Ángulo y Distancia ###Code def norma(x, producto_interno, G): ''' Dado un vector, una matriz de producto interno y un producto interno, se computa la norma de dicho vector. x, y: son vectores producto_interno: es el nombre de una función definida G_B: es la matriz de Gram del producto interno ''' return sqrt(producto_interno(x, x, G)) def angulo(x: Matrix, y: Matrix, G: Matrix, producto_interno): ''' Dados dos vectores, una matriz de producto interno y un producto interno, se computa el ángulo entre dichos vectores. 
x, y: son vectores producto_interno: es el nombre de una función definida G_B: es la matriz de Gram del producto interno ''' return acos(producto_interno(x, y, G) / (norma(x, producto_interno, G) * norma(y, producto_interno, G))) def distancia_de_un_vector_a_un_subespacio(v, B, G, producto_interno): '''AGARRAR CON PINZAS''' _v_tilda = v_tilda(v, B, eye(4), producto_interno) return sqrt(producto_interno(v, v, G) - (_v_tilda.T * G.inv() * _v_tilda)[0]) ###Output _____no_output_____ ###Markdown Áreas (triángulo y paralelogramo) ###Code def area_de_un_paralelogramo(G: Matrix): ''' Computa el área de un paralelogramo dada una matriz de Gram de los vértices de dicho triángulo G: es la matriz de Gram de los vectores que forman el paralelogramo cuya área se quiere calcular ''' return G.det() def area_de_un_triangulo(G: Matrix): ''' Computa el área de un triángulo dada una matriz de Gram de los vértices de dicho triángulo G: es la matriz de Gram de los vectores que forman el triángulo cuya área se quiere calcular ''' return 1/2 * sqrt(area_de_un_paralelogramo(G)) def resolver_area_de_un_triangulo_en_el_origen(G_B: Matrix, v1: Matrix, v2: Matrix, producto_interno): ''' Calcula el área del triángulo de vértices v1 y v2 (siendo nunguno el vector nulo) Ejemplo: v1 = Matrix([1, 1, 0]) v2 = Matrix([1, 0, 1]) G_B = Matrix([[1, 1, 1], [1, 2, 2], [1, 2, 3]]) resolver_area_de_un_triangulo_con_uno_de_los_vertices_en_el_origen(G_B, v1, v2, producto_interno_por_definicion) ''' G_v1_v2 = matriz_de_gram(producto_interno, G_B, v1, v2) return area_de_un_triangulo(G_v1_v2) def resolver_area_de_un_triangulo_corrido_del_origen(G_B: Matrix, v1: Matrix, v2: Matrix, v3: Matrix, producto_interno): ''' Calcula el área del triángulo de vértices v1, v2, y v3 (siendo nunguno el vector nulo) Ejemplo: v1 = Matrix([3, 1, 2]) v2 = Matrix([4, 2, 2]) v3 = Matrix([4, 1, 3]) G_B = Matrix([[1, 1, 1], [1, 2, 2], [1, 2, 3]]) resolver_area_de_un_triangulo_con_uno_de_los_vertices_en_el_origen(G_B, v1, v2, v3, producto_interno_por_definicion) ''' w1, w2, w3 = v1-v1, v2-v1, v3-v1 G_w2_w3 = matriz_de_gram(producto_interno, G_B, w2, w3) return area_de_un_triangulo(G_v1_v2) ###Output _____no_output_____ ###Markdown Proyecciones y Simetrías ###Code def v_tilda(x: Matrix, B: List[Matrix], G: Matrix, producto_interno): return Matrix([producto_interno(x, B[i], G) for i in range(len(B))]) def proyeccion_ortogonal_de_un_vector_a_un_subespacio(x: Matrix, B: list, G: Matrix, producto_interno): return G.inv() * v_tilda(x, B, G, producto_interno) def matriz_de_proyeccion_sobre_S1_en_direccion_de_S2_en_coordenadas_canonicas(S1: List[Matrix], S2: List[Matrix]): ''' Arma la matriz de proyección sobre un subespacio S1 en dirección de otro subespacio S2 en coordenadas canónicas dadas dos listas con vectores generadores de cada subespacio en particular. S1: lista de vectores que generan al subespacio S1 (sobre el que se quiere proyectar) S1: lista de vectores que generan al subespacio S2 (la dirección) ''' B_S1 = Matrix.hstack(*S1) B_S2 = Matrix.hstack(*S2) B = Matrix.hstack(B_S1, B_S2) P_BB = eye(B.rank()) for i in range(B_S1.rank(), B_S1.rank()-1, -1): P_BB[i, i] = 0 return B * P_BB * B.inv() def matriz_de_simetria_sobre_S1_en_direccion_de_S2_en_coordenadas_canonicas(S1: List[Matrix], S2: List[Matrix]): ''' Arma la matriz de simetríá sobre un subespacio S1 en dirección de otro subespacio S2 en coordenadas canónicas dadas dos listas con vectores generadores de cada subespacio en particular. 
S1: lista de vectores que generan al subespacio S1 (sobre el que se quiere hacer la simetría) S2: lista de vectores que generan al subespacio S2 (la dirección) ''' B_S1 = Matrix.hstack(*S1) B_S2 = Matrix.hstack(*S2) B = Matrix.hstack(B_S1, B_S2) S_BB = eye(B.rank()) for i in range(B_S1.rank(), B_S1.rank()-1, -1): S_BB[i, i] = -1 return B * S_BB * B.inv() ###Output _____no_output_____ ###Markdown QR ###Code ''' Q, R = A.QRdecomposition() ''' ###Output _____no_output_____
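###Markdown Ejemplo de uso (suponiendo que `A` es una `Matrix` de SymPy): el método se llama `QRdecomposition` (con "d" minúscula) y devuelve las matrices Q y R tales que A = Q*R. ###Code
from sympy import Matrix

A = Matrix([[12, -51, 4], [6, 167, -68], [-4, 24, -41]])
Q, R = A.QRdecomposition()  # Q con columnas ortonormales, R triangular superior
display(Q, R)
Q * R  # debería reconstruir A
###Output _____no_output_____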
02-Was-kann-ein-Jupyter-Notebook/Was kann ein Jupyter Notebook.ipynb
###Markdown Was kann ein Jupyter Notebook?In diesem Jupyter Notebook werden ein paar Features von Jupyter Notebooks präsentiert.Jedes Dokument (wie auch dieses) besteht aus Zellen.Die zwei typischen Zell-Arten sind `Markdown` und `Code`.Diese werden im Folgenden vorgestellt. Dies ist eine `Markdown-Zelle`.Hier kann Markdown-Code stehen.Der Feature-Umfang wird in diesem[Handbuch-Eintrag von Jupyter Notebook](https://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Working%20With%20Markdown%20Cells.html)erklärt.Klicken Sie doppelt auf diese Zelle, um in den Editier-Modus zu wechseln.Sobald Sie fertig sind, klicken Sie auf `▶`, um in den Darstellungsmodus zurück zu wechseln. Statt Markdown kann so eine Zelle auchHTML-Code enthalten und es gibt eine Unterstützung für $\LaTeX$:So können Formeln wie$\frac{n!}{k!(n-k)!} = \binom{n}{k}$geschrieben werden. Hier verwende ich Markdown, um ein Bild von einer externen Webseite einzubetten![](https://imgs.xkcd.com/comics/common_cold_2x.png) Markdown und HTML kommen an manchen Stellen an ihre Grenzen, z. B. stoßen IFrames manchmal an die Grenzen des Machbaren.Allerdings kann man auch über eine Code-Zelle dieses Problem lösen. ###Code # Youtube-Videos sind auch möglich from IPython.display import YouTubeVideo YouTubeVideo("ctOM-Gza04Y", 560, 315) # Hier steht valider Python-Code 4 + 2 # Im Code können Visualisierungen erstellt werden # Falls Sie beim ersten Mal einen Fehler sehen, führen Sie diese Zelle einfach noch mal aus import matplotlib.pyplot as plt data = { 'apples': 10, 'oranges': 15, 'lemons': 5, 'limes': 20 } names = list(data.keys()) values = list(data.values()) plt.bar(names, values) plt.show() # Fehlermeldungen werden ebenfalls direkt am Code angezeigt. 3 / 0 ###Output _____no_output_____ ###Markdown Jupyter-Notebook-spezifische Funktionalitäten Wir können ebenfalls einsehen, was wir bislang in welcher Reihenfolge ausgeführt haben.Dabei werden nur Code-Zellen berücksichtigt.Eine ausführliche Einleitung gibt es unter[Python Data Science Handbook](https://jakevdp.github.io/PythonDataScienceHandbook/01.04-input-output-history.html). ###Code for i, in_i in enumerate(In): # In der Liste "In" werden alle Inputs gespeichert print(f"In[{i}]:\n{in_i}") # Betrachten Sie im Jupyter Notebook oben die In[x]-Bezeichner, das sollte übereinstimmen print("-"*80) # Erzeuge eine visuelle Trennung der Eingaben ###Output _____no_output_____ ###Markdown Die Zellen eines Jupyter Notebooks werden immer in der Reihenfolge ausgeführt, wie Sie sie ausführen.Es werden *nie* Zellen für Sie im Hintergrund ausgeführt.Sie können also Zellen in einer beliebigen Reihenfolge ausführen - also hypothetisch gesprochen auch von der untersten Zelle eines Dokuments rückwärts bis zur obersten Zelle.Vermutlich passt dies aber nicht zur intendierten Reihenfolge des Autors.Achten Sie besonders auf die Reihenfolge der Ausführung, wenn Sie Objekte modifizieren oder Variablen überschreiben, die in mehreren Zellen vorkommen.Dann führt das wiederholte Ausführen einer Zelle u. U. zu Fehlern oder einem unerwünschten Verhalten.Hier ein kurzes Beispiel: ###Code # Führen Sie zuerst diese Zelle aus x = 7 # Überspringen Sie zunächst diese Zelle x + 5 # Führen Sie als zweites diese Zeile aus x = None # Nun führen Sie erst die Zelle in der Mitte aus ###Output _____no_output_____
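###Markdown Analog zur Liste `In` speichert IPython die Rückgabewerte der Code-Zellen im Dictionary `Out` (nur Zellen, die tatsächlich einen Wert zurückgegeben haben, z. B. `4 + 2` weiter oben). Eine kleine Skizze; die konkrete Ausgabe hängt davon ab, welche Zellen Sie zuvor ausgeführt haben. ###Code
# "Out" ist ein Dictionary: Ausführungsnummer -> Rückgabewert der Zelle
for nummer, ergebnis in Out.items():
    print(f"Out[{nummer}]: {ergebnis}")
###Output _____no_output_____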
Per-Transaction Models.ipynb
###Markdown Computing Per-Transaction Responsibility for EthereumEthereum miners use energy that causes emissions. The amount of emissions varies with respect to energy sources, and the amount of energy varies mainly as a function of time with respect to GPU efficiency. To simplify this discussion, we will work with hashrate (hashes per second) and hashcount (hashes) instead of energy or emissions.We assume:* It is more meaningful to allocate hashcount responsibility across all miners, or all transactions, than across both simultaneously. Here we allocate based on transactions, because the goal is to create accountability at the point of use.* The hashrate does not vary directly based on the properties of the transactions. This means allocating a hashcount to each transaction is a philosophical question rather than a technical question.* The sum of allocated usage should add up to the total usage. This means the sum of transaction hashcounts should add up to the total Ethereum hashcount. Or that the sum of all allocated emissions should add up to the total emissions.* Transactions contribute to the hashrate in proportion to how much the transaction incentivizes miners.Miners are incentivized by payments in Ethereum, and by speculation about the future value of Ethereum. Miners are paid in two ways:1. Varying transaction fees paid by users when making a transaction, recently around 2ETH total.2. A fixed per-block reward at 2ETH (which was previously 3ETH and 5ETH), with additional nuances for uncle blocks.There are a few big questions to keep in mind:- Should responsibility be allocated based on first order effects (like payment of transaction fees) or second order effects (like increasing the hashrate)? If the hashrate of Ethereum was fixed, how could we allocate responsibility?- Should responsibility be assigned as a portion of a block's energy usage (assuming each block is the same throughout a day), a portion of one day, or as a portion of a longer timeline like the entire history of Ethereum? Should a transaction years ago also be considered partially responsible for energy usage today?- Should the block rewards be ignored, or should they factor into the model evenly across all transactions, based on gas usage, or something else?- When comparing over longer timelines, how can we account for the fact that a higher price for Ethereum provides a higher incentive to miners?The following models explore the effects of different answers to these questions. PreparationFirst we download some daily statistics about Ethereum, and load them into dictionaries. 
###Code !wget --quiet -O "data/statistics.csv" "https://docs.google.com/spreadsheets/d/e/2PACX-1vRigeH5xVkOZNOtBM6blHfrPXNqyWrtS7fVzOUm_d9fbNtTyTCBwAnbBPdQ76q6-2p0xoNcy7VxhXnZ/pub?output=csv" import pandas as pd class EthereumHistory: def __init__(self, fn='data/statistics.csv'): df = pd.read_csv(fn) dates = [e.date() for e in pd.to_datetime(df['Date'])] self.dates = dates self.tx_count = dict(zip(dates, df['Tx count'])) self.tx_fees = dict(zip(dates, df['Tx fees (ETH)'])) self.block_count = dict(zip(dates, df['Block count'])) self.block_rewards = dict(zip(dates, df['Block rewards (ETH)'])) self.gas_used = dict(zip(dates, df['Gas used'])) self.price = dict(zip(dates, df['Price (USD)'])) self.hashcount = dict(zip(dates, df['Hashcount (EH/day)'])) self.tx_count_total = sum(self.tx_count.values()) self.tx_fees_total = sum(self.tx_fees.values()) self.block_count_total = sum(self.block_count.values()) self.block_rewards_total = sum(self.block_rewards.values()) self.gas_used_total = sum(self.gas_used.values()) self.price_total = sum(self.price.values()) self.hashcount_total = sum(self.hashcount.values()) history = EthereumHistory() ###Output _____no_output_____ ###Markdown Some helper functions for simplifying the code behind different models. ###Code from etherscan import Etherscan, etherscan_gas_fees, etherscan_gas_used, etherscan_timestamp from nifty_gateway import list_nifty_gateway from utils import load_contracts def get_tx_date(tx): return etherscan_timestamp(tx).date() def get_tx_fees(tx): return etherscan_gas_fees(tx) / 1e18 # convert wei to eth def get_tx_gas(tx): return etherscan_gas_used(tx) ###Output _____no_output_____ ###Markdown Simple modelsInitial discussion around allocating energy and emissions was focused on [per-transaction allocation](https://digiconomist.net/ethereum-energy-consumption):$hashcount_{date} \times \dfrac{1}{txcount_{date}}$Or [gas-weighted allocation](https://memoakten.medium.com/the-unreasonable-ecological-cost-of-cryptoart-2221d3eb2053):$hashcount_{date} \times \dfrac{gas_{tx}}{gas_{date}}$We repeat that here. ###Code def model_tx_count(tx): date = get_tx_date(tx) return history.hashcount[date] / history.tx_count[date] def model_gas(tx): date = get_tx_date(tx) return history.hashcount[date] * get_tx_gas(tx) / history.gas_used[date] ###Output _____no_output_____ ###Markdown FeesOnly modelThis is the simplest interpretation of the idea that "users should be held accountable for a service proportional to how much they pay for that service", because the payment is what rewards and therefore incentivizes the service to continue running. This is similar to the gas-based model, adding gas price to the mix.$hashcount_{date}\times\dfrac{fees_{tx}}{fees_{date}}$ ###Code def model_fees_only(tx): tx_date = get_tx_date(tx) fee_ratio = get_tx_fees(tx) / history.tx_fees[tx_date] return history.hashcount[tx_date] * fee_ratio ###Output _____no_output_____ ###Markdown HistoricalFees modelThe approach of the FeesOnly model looks at each transaction in the context of the day on which it was made. This time envelope could be narrowed or expanded. It could be narrowed to the block on which it was made (which should be similar to FeesOnly with some added noise). Or it could be expanded to include the entire history of Ethereum. 
Expanding would account for the fact that Ethereum transactions do not happen in a vacuum, but they build on all previous transactions, and lay a foundation for future transactions.One challenge of this model: until Ethereum docks with Ethereum 2, these numbers will changes over time as the `history.hashcount_total` and `history.fees_total` changes.$hashcount_{total}\times\dfrac{fees_{tx}}{fees_{total}}$ ###Code def model_historical_fees(tx): fee_ratio = get_tx_fees(tx) / history.tx_fees_total return history.hashcount_total * fee_ratio ###Output _____no_output_____ ###Markdown HistoricalFeesTimesPrice modelBecause the price of Ethereum varies from day to day, a transaction fee on a high-priced day pays more to miners in fiat equivalent than the same transaction fee on a low-priced day.$hashcount_{total}\times\dfrac{fees_{tx} \times price_{date}}{\sum_{d}^{history} fees_{d} \times price_{d}}$ ###Code # compute the fee volume across all dates all_fees_price = sum([history.tx_fees[date] * history.price[date] for date in history.dates]) def model_historical_fees_price(tx): tx_date = get_tx_date(tx) tx_fees_price = get_tx_fees(tx) * history.price[tx_date] fees_price_ratio = tx_fees_price / all_fees_price return history.hashcount_total * fees_price_ratio ###Output _____no_output_____ ###Markdown FeesAndBlock modelIt could be argued that miners are only half-incentivized by transaction fees, because transaction fees only make up about half of their income, with the other half coming from block rewards.Consider two blocks with two transactions each:1. Block 1 has transaction A paying 200 gwei and transaction B paying 100 gwei2. Block 2 has transaction A paying 2 ETH and transaction B paying 1 ETHAlthough the ratio between transactions A and B are similar in each block, the miner is arguably disproportionately more incentivized by transaction A over transaction B in block 2 compared to block 1. If the block reward is 2 ETH, and we assume that every transaction is equally responsible for the block reward, then we could argue that the weighting for the transactions in each block is as follows:* block 1 tx A = (200 gwei + 2 ETH / 2) / (200 gwei + 100 gwei + 2 ETH) ~= 1/2* block 1 tx B = (100 gwei + 2 ETH / 2) / (200 gwei + 100 gwei + 2 ETH) ~= 1/2* block 2 tx A = (2 ETH + 2 ETH / 2) / (2 ETH + 1 ETH + 2 ETH) = 4/6* block 2 tx B = (1 ETH + 2 ETH / 2) / (2 ETH + 1 ETH + 2 ETH) = 2/6Like other models, this model operates on a per-day basis, so the block rewards include uncle block rewards.$hashcount_{date}\times\dfrac{fees_{tx} + blockrewards_{date} \times \dfrac{1}{txcount_{date}}}{fees_{date} + blockrewards_{date}}$ ###Code def model_fees_block(tx): tx_date = get_tx_date(tx) block_rewards = history.block_rewards[tx_date] block_rewards_portion = block_rewards / history.tx_count[tx_date] tx_fees_and_block = get_tx_fees(tx) + block_rewards_portion all_fees_and_block = history.tx_fees[tx_date] + block_rewards fees_and_block_ratio = tx_fees_and_block / all_fees_and_block return history.hashcount[tx_date] * fees_and_block_ratio ###Output _____no_output_____ ###Markdown FeesAndBlockTimesGas modelInstead of assuming every transaction is equally responsible for the block reward, we can use gas as a weighting factor. This makes sense because high-gas transactions effectively "push out" low-gas transactions. 
If one transaction used 12,000,000 gas, then it uses part of the block that could be otherwised used for 120x transactions that use 100,000 gas.$hashcount_{date}\times\dfrac{fees_{tx} + blockrewards_{date} \times \dfrac{gas_{tx}}{gas_{date}}}{fees_{date} + blockrewards_{date}}$ ###Code def model_fees_block_gas(tx): tx_date = get_tx_date(tx) block_rewards = history.block_rewards[tx_date] tx_gas_ratio = get_tx_gas(tx) / history.gas_used[tx_date] block_rewards_portion = block_rewards * tx_gas_ratio tx_fees_and_block = get_tx_fees(tx) + block_rewards_portion all_fees_and_block = history.tx_fees[tx_date] + block_rewards fees_and_block_ratio = tx_fees_and_block / all_fees_and_block return history.hashcount[tx_date] * fees_and_block_ratio ###Output _____no_output_____ ###Markdown HistoricalFeesAndBlock modelThis opens the "time envelope" of the FeesAndBlock approach to cover all of Ethereum's history.$hashcount_{total}\times\dfrac{fees_{tx} + blockrewards_{total} \times \dfrac{1}{txcount_{total}}}{fees_{total} + blockrewards_{total}}$ ###Code def model_historical_fees_block(tx): block_rewards = history.block_rewards_total block_rewards_portion = block_rewards / history.tx_count_total tx_fees_and_block = get_tx_fees(tx) + block_rewards_portion all_fees_and_block = history.tx_fees_total + block_rewards fees_and_block_ratio = tx_fees_and_block / all_fees_and_block return history.hashcount_total * fees_and_block_ratio ###Output _____no_output_____ ###Markdown HistoricalFeesAndBlockTimesGas modelThis adds gas weighting to the HistoricalFeesAndBlock model.$hashcount_{total}\times\dfrac{fees_{tx} + blockrewards_{total} \times \dfrac{gas_{tx}}{gas_{total}}}{fees_{total} + blockrewards_{total}}$ ###Code def model_historical_fees_block_gas(tx): block_rewards = history.block_rewards_total tx_gas_ratio = get_tx_gas(tx) / history.gas_used_total block_rewards_portion = block_rewards * tx_gas_ratio tx_fees_and_block = get_tx_fees(tx) + block_rewards_portion all_fees_and_block = history.tx_fees_total + block_rewards fees_and_block_ratio = tx_fees_and_block / all_fees_and_block return history.hashcount_total * fees_and_block_ratio ###Output _____no_output_____ ###Markdown HistoricalFeesAndBlockTimesPrice modelThis weights HistoricalFeesAndBlock by the price of Ethereum.$hashcount_{total}\times\dfrac{(fees_{tx} + blockrewards_{total} \times \dfrac{1}{txcount_{total}}) \times price_{date}}{\sum_{d}^{history} (fees_{d} + blockrewards_{d}) \times price_{d}}$ ###Code all_fees_block_price = \ sum([(history.tx_fees[date] + history.block_rewards[date]) * history.price[date] for date in history.dates]) def model_historical_fees_block_price(tx): tx_price = history.price[get_tx_date(tx)] block_rewards = history.block_rewards_total block_rewards_portion = block_rewards / history.tx_count_total tx_fees_and_block_price = (get_tx_fees(tx) + block_rewards_portion) * tx_price fees_and_block_price_ratio = tx_fees_and_block_price / all_fees_block_price return history.hashcount_total * fees_and_block_price_ratio ###Output _____no_output_____ ###Markdown HistoricalFeesAndBlockTimesGasAndPrice modelThis weights HistoricalFeesAndBlockTimesPrice by the gas used instead of treating each transaction equally.$hashcount_{total}\times\dfrac{(fees_{tx} + blockrewards_{total} \times \dfrac{gas_{tx}}{gas_{total}}) \times price_{date}}{\sum_{d}^{history} (fees_{d} + blockrewards_{d}) \times price_{d}}$ ###Code def model_historical_fees_block_gas_price(tx): tx_price = history.price[get_tx_date(tx)] block_rewards = history.block_rewards_total 
tx_gas_ratio = get_tx_gas(tx) / history.gas_used_total block_rewards_portion = block_rewards * tx_gas_ratio tx_fees_and_block_price = (get_tx_fees(tx) + block_rewards_portion) * tx_price fees_and_block_price_ratio = tx_fees_and_block_price / all_fees_block_price return history.hashcount_total * fees_and_block_price_ratio ###Output _____no_output_____ ###Markdown The last two models are harder to implement because they require close analysis of the Ethereum blockchain. ExtraPrice modelMiners prefer transactions that pay higher gas prices. To the extent that a transaction is higher than the minimum gas price that might have been paid, we may consider that transaction to be disproportionately incentivizing miners, and therefore disproportionately responsible for energy use. Something like:```pythondef model_extra_fee(tx): block_tx = get_transactions(get_block(tx)) gas_prices = [get_gas_price(e) for e in block_tx] min_gas_price = min(gas_prices) tx_weight = get_gas_price(tx) - min_gas_price all_weights = sum([(get_gas_price(e) - min_gas_price) for e in block_tx]) block_hashcount = history.hashcount[date] / history.block_count[date] return block_hashcount * tx_weight / all_weights``` ExtraFee modelInstead of looking at the increase over minimum gas price alone, we can also consider that a large gas usage with a larger gas price pays more to miners than a low gas usage with the same gas price. This could be considered the "extra" beyond what the miner would have been "guaranteed" to get paid otherwise. Something like:```pythondef model_extra_fee(tx): block_tx = get_transactions(get_block(tx)) gas_prices = [get_gas_price(e) for e in block_tx] min_gas_price = min(gas_prices) tx_extra = get_tx_fees(tx) - min_gas_price * get_tx_gas(tx) all_extra = sum([(get_tx_fees(e) - min_gas_price * get_tx_gas(e)) for e in block_tx]) block_hashcount = history.hashcount[date] / history.block_count[date] return block_hashcount * tx_extra / all_extra```Both of these techniques have two related problems:1. Any transaction that has the lowest gas price will be held unaccountable.2. If all transactions have the same gas price, the result is undefined. Test on one contractLet's test all the methods on a single contract. First we load the transactions then compute the different model results. Results are in exahashes (EH). For context, an average day on Ethereum at the beginning of 2021-04 was 42EH/day. ###Code methods = [ model_tx_count, model_gas, model_fees_only, model_historical_fees, model_historical_fees_price, model_fees_block, model_fees_block_gas, model_historical_fees_block, model_historical_fees_block_gas, model_historical_fees_block_price, model_historical_fees_block_gas_price ] contracts = load_contracts() etherscan = Etherscan() address = contracts['Foundation/ERC-721'] transactions = etherscan.load_transactions(address) for method in methods: energy = sum(map(method, transactions)) print(f'{energy:4.1f}EH {method.__name__}') ###Output 2.2EH model_tx_count 4.3EH model_gas 3.7EH model_fees_only 8.5EH model_historical_fees 18.0EH model_historical_fees_price 2.9EH model_fees_block 4.1EH model_fees_block_gas 2.1EH model_historical_fees_block 3.6EH model_historical_fees_block_gas 13.9EH model_historical_fees_block_price 23.8EH model_historical_fees_block_gas_price ###Markdown Run on all contractsFinally we run all methods on all contracts. 
###Code cols = ('Name', 'Kind', *[e.__name__ for e in methods]) rows = [] for name_kind, address in contracts.items(): if name_kind == 'Nifty Gateway/multiple': addresses = list_nifty_gateway(update=False) transactions = etherscan.load_transactions_multiple(addresses) else: transactions = etherscan.load_transactions(address) row = name_kind.split('/') for method in methods: energy = sum(map(method, transactions)) row.append(energy) rows.append(row) df = pd.DataFrame(rows, columns=cols) df ###Output _____no_output_____
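###Markdown A small follow-up sketch (assuming the `df` table built above): expressing each model's estimate relative to the simple per-transaction model makes it easier to see how much the weighting choices matter. The output file name is only an example. ###Code
relative = df.copy()
for method in methods:
    name = method.__name__
    relative[name] = df[name] / df['model_tx_count']
relative.to_csv('data/per_contract_model_comparison.csv', index=False)  # example output path
relative
###Output _____no_output_____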
###Code !wget --quiet -O "data/statistics.csv" "https://docs.google.com/spreadsheets/d/e/2PACX-1vRigeH5xVkOZNOtBM6blHfrPXNqyWrtS7fVzOUm_d9fbNtTyTCBwAnbBPdQ76q6-2p0xoNcy7VxhXnZ/pub?output=csv" import pandas as pd class EthereumHistory: def __init__(self, fn='data/statistics.csv'): df = pd.read_csv(fn) dates = [e.date() for e in pd.to_datetime(df['Date'])] self.dates = dates self.tx_count = dict(zip(dates, df['Tx count'])) self.tx_fees = dict(zip(dates, df['Tx fees (ETH)'])) self.block_count = dict(zip(dates, df['Block count'])) self.block_rewards = dict(zip(dates, df['Block rewards (ETH)'])) self.gas_used = dict(zip(dates, df['Gas used'])) self.price = dict(zip(dates, df['Price (USD)'])) self.hashcount = dict(zip(dates, df['Hashcount (EH/day)'])) self.tx_count_total = sum(self.tx_count.values()) self.tx_fees_total = sum(self.tx_fees.values()) self.block_count_total = sum(self.block_count.values()) self.block_rewards_total = sum(self.block_rewards.values()) self.gas_used_total = sum(self.gas_used.values()) self.price_total = sum(self.price.values()) self.hashcount_total = sum(self.hashcount.values()) history = EthereumHistory() ###Output _____no_output_____ ###Markdown Some helper functions for simplifying the code behind different models. ###Code from etherscan import Etherscan, etherscan_gas_fees, etherscan_gas_used, etherscan_timestamp from nifty_gateway import list_nifty_gateway from utils import load_contracts def get_tx_date(tx): return etherscan_timestamp(tx).date() def get_tx_fees(tx): return etherscan_gas_fees(tx) / 1e18 # convert wei to eth def get_tx_gas(tx): return etherscan_gas_used(tx) ###Output _____no_output_____ ###Markdown Simple modelsInitial discussion around allocating energy and emissions was focused on [per-transaction allocation](https://digiconomist.net/ethereum-energy-consumption):$hashcount_{date} \times \dfrac{1}{txcount_{date}}$Or [gas-weighted allocation](https://memoakten.medium.com/the-unreasonable-ecological-cost-of-cryptoart-2221d3eb2053):$hashcount_{date} \times \dfrac{gas_{tx}}{gas_{date}}$We repeat that here. ###Code def model_tx_count(tx): date = get_tx_date(tx) return history.hashcount[date] / history.tx_count[date] def model_gas(tx): date = get_tx_date(tx) return history.hashcount[date] * get_tx_gas(tx) / history.gas_used[date] ###Output _____no_output_____ ###Markdown FeesOnly modelThis is the simplest interpretation of the idea that "users should be held accountable for a service proportional to how much they pay for that service", because the payment is what rewards and therefore incentivizes the service to continue running. This is similar to the gas-based model, adding gas price to the mix.$hashcount_{date}\times\dfrac{fees_{tx}}{fees_{date}}$ ###Code def model_fees_only(tx): tx_date = get_tx_date(tx) fee_ratio = get_tx_fees(tx) / history.tx_fees[tx_date] return history.hashcount[tx_date] * fee_ratio ###Output _____no_output_____ ###Markdown HistoricalFees modelThe approach of the FeesOnly model looks at each transaction in the context of the day on which it was made. This time envelope could be narrowed or expanded. It could be narrowed to the block on which it was made (which should be similar to FeesOnly with some added noise). Or it could be expanded to include the entire history of Ethereum. 
Expanding would account for the fact that Ethereum transactions do not happen in a vacuum, but they build on all previous transactions, and lay a foundation for future transactions.One challenge of this model: until Ethereum docks with Ethereum 2, these numbers will changes over time as the `history.hashcount_total` and `history.fees_total` changes.$hashcount_{total}\times\dfrac{fees_{tx}}{fees_{total}}$ ###Code def model_historical_fees(tx): fee_ratio = get_tx_fees(tx) / history.tx_fees_total return history.hashcount_total * fee_ratio ###Output _____no_output_____ ###Markdown HistoricalFeesTimesPrice modelBecause the price of Ethereum varies from day to day, a transaction fee on a high-priced day pays more to miners in fiat equivalent than the same transaction fee on a low-priced day.$hashcount_{total}\times\dfrac{fees_{tx} \times price_{date}}{\sum_{d}^{history} fees_{d} \times price_{d}}$ ###Code # compute the fee volume across all dates all_fees_price = sum([history.tx_fees[date] * history.price[date] for date in history.dates]) def model_historical_fees_price(tx): tx_date = get_tx_date(tx) tx_fees_price = get_tx_fees(tx) * history.price[tx_date] fees_price_ratio = tx_fees_price / all_fees_price return history.hashcount_total * fees_price_ratio ###Output _____no_output_____ ###Markdown FeesAndBlock modelIt could be argued that miners are only half-incentivized by transaction fees, because transaction fees only make up about half of their income, with the other half coming from block rewards.Consider two blocks with two transactions each:1. Block 1 has transaction A paying 200 gwei and transaction B paying 100 gwei2. Block 2 has transaction A paying 2 ETH and transaction B paying 1 ETHAlthough the ratio between transactions A and B are similar in each block, the miner is arguably disproportionately more incentivized by transaction A over transaction B in block 2 compared to block 1. If the block reward is 2 ETH, and we assume that every transaction is equally responsible for the block reward, then we could argue that the weighting for the transactions in each block is as follows:* block 1 tx A = (200 gwei + 2 ETH / 2) / (200 gwei + 100 gwei + 2 ETH) ~= 1/2* block 1 tx B = (100 gwei + 2 ETH / 2) / (200 gwei + 100 gwei + 2 ETH) ~= 1/2* block 2 tx A = (2 ETH + 2 ETH / 2) / (2 ETH + 1 ETH + 2 ETH) = 4/6* block 2 tx B = (1 ETH + 2 ETH / 2) / (2 ETH + 1 ETH + 2 ETH) = 2/6Like other models, this model operates on a per-day basis, so the block rewards include uncle block rewards.$hashcount_{date}\times\dfrac{fees_{tx} + blockrewards_{date} \times \dfrac{1}{txcount_{date}}}{fees_{date} + blockrewards_{date}}$ ###Code def model_fees_block(tx): tx_date = get_tx_date(tx) block_rewards = history.block_rewards[tx_date] block_rewards_portion = block_rewards / history.tx_count[tx_date] tx_fees_and_block = get_tx_fees(tx) + block_rewards_portion all_fees_and_block = history.tx_fees[tx_date] + block_rewards fees_and_block_ratio = tx_fees_and_block / all_fees_and_block return history.hashcount[tx_date] * fees_and_block_ratio ###Output _____no_output_____ ###Markdown FeesAndBlockTimesGas modelInstead of assuming every transaction is equally responsible for the block reward, we can use gas as a weighting factor. This makes sense because high-gas transactions effectively "push out" low-gas transactions. 
If one transaction used 12,000,000 gas, then it uses part of the block that could be otherwised used for 120x transactions that use 100,000 gas.$hashcount_{date}\times\dfrac{fees_{tx} + blockrewards_{date} \times \dfrac{gas_{tx}}{gas_{date}}}{fees_{date} + blockrewards_{date}}$ ###Code def model_fees_block_gas(tx): tx_date = get_tx_date(tx) block_rewards = history.block_rewards[tx_date] tx_gas_ratio = get_tx_gas(tx) / history.gas_used[tx_date] block_rewards_portion = block_rewards * tx_gas_ratio tx_fees_and_block = get_tx_fees(tx) + block_rewards_portion all_fees_and_block = history.tx_fees[tx_date] + block_rewards fees_and_block_ratio = tx_fees_and_block / all_fees_and_block return history.hashcount[tx_date] * fees_and_block_ratio ###Output _____no_output_____ ###Markdown HistoricalFeesAndBlock modelThis opens the "time envelope" of the FeesAndBlock approach to cover all of Ethereum's history.$hashcount_{total}\times\dfrac{fees_{tx} + blockrewards_{total} \times \dfrac{1}{txcount_{total}}}{fees_{total} + blockrewards_{total}}$ ###Code def model_historical_fees_block(tx): block_rewards = history.block_rewards_total block_rewards_portion = block_rewards / history.tx_count_total tx_fees_and_block = get_tx_fees(tx) + block_rewards_portion all_fees_and_block = history.tx_fees_total + block_rewards fees_and_block_ratio = tx_fees_and_block / all_fees_and_block return history.hashcount_total * fees_and_block_ratio ###Output _____no_output_____ ###Markdown HistoricalFeesAndBlockTimesGas modelThis adds gas weighting to the HistoricalFeesAndBlock model.$hashcount_{total}\times\dfrac{fees_{tx} + blockrewards_{total} \times \dfrac{gas_{tx}}{gas_{total}}}{fees_{total} + blockrewards_{total}}$ ###Code def model_historical_fees_block_gas(tx): block_rewards = history.block_rewards_total tx_gas_ratio = get_tx_gas(tx) / history.gas_used_total block_rewards_portion = block_rewards * tx_gas_ratio tx_fees_and_block = get_tx_fees(tx) + block_rewards_portion all_fees_and_block = history.tx_fees_total + block_rewards fees_and_block_ratio = tx_fees_and_block / all_fees_and_block return history.hashcount_total * fees_and_block_ratio ###Output _____no_output_____ ###Markdown HistoricalFeesAndBlockTimesPrice modelThis weights HistoricalFeesAndBlock by the price of Ethereum.$hashcount_{total}\times\dfrac{(fees_{tx} + blockrewards_{total} \times \dfrac{1}{txcount_{total}}) \times price_{date}}{\sum_{d}^{history} (fees_{d} + blockrewards_{d}) \times price_{d}}$ ###Code all_fees_block_price = \ sum([(history.tx_fees[date] + history.block_rewards[date]) * history.price[date] for date in history.dates]) def model_historical_fees_block_price(tx): tx_price = history.price[get_tx_date(tx)] block_rewards = history.block_rewards_total block_rewards_portion = block_rewards / history.tx_count_total tx_fees_and_block_price = (get_tx_fees(tx) + block_rewards_portion) * tx_price fees_and_block_price_ratio = tx_fees_and_block_price / all_fees_block_price return history.hashcount_total * fees_and_block_price_ratio ###Output _____no_output_____ ###Markdown HistoricalFeesAndBlockTimesGasAndPrice modelThis weights HistoricalFeesAndBlockTimesPrice by the gas used instead of treating each transaction equally.$hashcount_{total}\times\dfrac{(fees_{tx} + blockrewards_{total} \times \dfrac{gas_{tx}}{gas_{total}}) \times price_{date}}{\sum_{d}^{history} (fees_{d} + blockrewards_{d}) \times price_{d}}$ ###Code def model_historical_fees_block_gas_price(tx): tx_price = history.price[get_tx_date(tx)] block_rewards = history.block_rewards_total 
tx_gas_ratio = get_tx_gas(tx) / history.gas_used_total block_rewards_portion = block_rewards * tx_gas_ratio tx_fees_and_block_price = (get_tx_fees(tx) + block_rewards_portion) * tx_price fees_and_block_price_ratio = tx_fees_and_block_price / all_fees_block_price return history.hashcount_total * fees_and_block_price_ratio ###Output _____no_output_____ ###Markdown The last two models are harder to implement because they require close analysis of the Ethereum blockchain. ExtraPrice modelMiners prefer transactions that pay higher gas prices. To the extent that a transaction is higher than the minimum gas price that might have been paid, we may consider that transaction to be disproportionately incentivizing miners, and therefore disproportionately responsible for energy use. Something like:```pythondef model_extra_fee(tx): block_tx = get_transactions(get_block(tx)) gas_prices = [get_gas_price(e) for e in block_tx] min_gas_price = min(gas_prices) tx_weight = get_gas_price(tx) - min_gas_price all_weights = sum([(get_gas_price(e) - min_gas_price) for e in block_tx]) block_hashcount = history.hashcount[date] / history.block_count[date] return block_hashcount * tx_weight / all_weights``` ExtraFee modelInstead of looking at the increase over minimum gas price alone, we can also consider that a large gas usage with a larger gas price pays more to miners than a low gas usage with the same gas price. This could be considered the "extra" beyond what the miner would have been "guaranteed" to get paid otherwise. Something like:```pythondef model_extra_fee(tx): block_tx = get_transactions(get_block(tx)) gas_prices = [get_gas_price(e) for e in block_tx] min_gas_price = min(gas_prices) tx_extra = get_tx_fees(tx) - min_gas_price * get_tx_gas(tx) all_extra = sum([(get_tx_fees(e) - min_gas_price * get_tx_gas(e)) for e in block_tx]) block_hashcount = history.hashcount[date] / history.block_count[date] return block_hashcount * tx_extra / all_extra```Both of these techniques have two related problems:1. Any transaction that has the lowest gas price will be held unaccountable.2. If all transactions have the same gas price, the result is undefined. Test on one contractLet's test all the methods on a single contract. First we load the transactions then compute the different model results. Results are in exahashes (EH). For context, an average day on Ethereum at the beginning of 2021-04 was 42EH/day. ###Code methods = [ model_tx_count, model_gas, model_fees_only, model_historical_fees, model_historical_fees_price, model_fees_block, model_fees_block_gas, model_historical_fees_block, model_historical_fees_block_gas, model_historical_fees_block_price, model_historical_fees_block_gas_price ] contracts = load_contracts() etherscan = Etherscan() address = contracts['Foundation/ERC-721'] transactions = etherscan.load_transactions(address) for method in methods: energy = sum(map(method, transactions)) print(f'{energy:4.1f}EH {method.__name__}') ###Output 2.2EH model_tx_count 4.3EH model_gas 3.7EH model_fees_only 8.5EH model_historical_fees 18.0EH model_historical_fees_price 2.9EH model_fees_block 4.1EH model_fees_block_gas 2.1EH model_historical_fees_block 3.6EH model_historical_fees_block_gas 13.9EH model_historical_fees_block_price 23.8EH model_historical_fees_block_gas_price ###Markdown Run on all contractsFinally we run all methods on all contracts. 
###Code cols = ('Name', 'Kind', *[e.__name__ for e in methods]) rows = [] for name_kind, address in contracts.items(): if name_kind == 'Nifty Gateway/multiple': addresses = list_nifty_gateway(update=False) transactions = etherscan.load_transactions_multiple(addresses) else: transactions = etherscan.load_transactions(address) row = name_kind.split('/') for method in methods: energy = sum(map(method, transactions)) row.append(energy) rows.append(row) df = pd.DataFrame(rows, columns=cols) df ###Output _____no_output_____
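###Markdown One possible way to soften the two problems noted for the ExtraPrice/ExtraFee models is sketched below. This is only a sketch: it reuses the same hypothetical helpers (`get_block`, `get_transactions`, `get_gas_price`) assumed by the pseudocode above, none of which are implemented in this notebook, and it only falls back to an even split when every transaction in a block pays the minimum gas price; giving lowest-gas-price transactions a non-zero share in general would still require blending with one of the fee-based models. ###Code def model_extra_fee_with_fallback(tx):
    # Same hypothetical helpers as the ExtraFee pseudocode above
    date = get_tx_date(tx)
    block_tx = get_transactions(get_block(tx))
    min_gas_price = min(get_gas_price(e) for e in block_tx)
    tx_extra = get_tx_fees(tx) - min_gas_price * get_tx_gas(tx)
    all_extra = sum(get_tx_fees(e) - min_gas_price * get_tx_gas(e) for e in block_tx)
    block_hashcount = history.hashcount[date] / history.block_count[date]
    if all_extra == 0:
        # Every transaction paid the minimum gas price: split the block evenly
        return block_hashcount / len(block_tx)
    return block_hashcount * tx_extra / all_extra ###Output _____no_output_____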
Binary Complement and List Comprehension.ipynb
###Markdown Binary Complement `~` is the bitwise inversion operator, and it acts exactly as defined: - The bitwise inversion of x is defined as -(x+1). One can use this in order to determine the opposite diagonal (anti-diagonal) of a matrix: ###Code matrix = [[1, 1, 0],
          [2, 2, 2],
          [0, 3, 3]]

diagonal_1 = [matrix[i][i] for i in range(0, len(matrix))]
diagonal_2 = [matrix[i][~i] for i in range(0, len(matrix))]

print('Diagonal 1: {0} \nDiagonal 2: {1}'.format(diagonal_1, diagonal_2)) ###Output Diagonal 1: [1, 2, 3] Diagonal 2: [0, 2, 0]
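###Markdown Since `~i` equals `-(i+1)`, indexing a row with `matrix[i][~i]` counts columns from the right-hand end of the row, which is why it traverses the anti-diagonal. A quick check of that identity (a small added example, not part of the original notebook): ###Code for i in range(3):
    # ~i == -(i + 1), so row i is paired with the (i + 1)-th column from the right
    print(i, ~i, -(i + 1)) ###Output _____no_output_____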
classical-systems/CS12_Coin_Flip_Game_Solutions.ipynb
###Markdown $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} 1 \mspace{-1.5mu} \rfloor } $$ \newcommand{\greenbit}[1] {\mathbf{{\color{green}1}}} $$ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}1}}} $$ \newcommand{\redbit}[1] {\mathbf{{\color{red}1}}} $$ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}1}}} $$ \newcommand{\blackbit}[1] {\mathbf{{\color{black}1}}} $ Solutions for A Game with two biased coins _prepared by Abuzer Yakaryilmaz_ Task 2: Tracing ten biased coin tosses By using python, calculate the probabilities of Asja seeing heads and tails after 10 coin tosses.$GameCoins = \begin{array}{c|cc} \hookleftarrow & \mathbf{Head} & \mathbf{Tail} \\ \hline \mathbf{Head} & 0.6 & 0.3\\ \mathbf{Tail} & 0.4 & 0.7 \end{array} = \begin{array}{c|cc} \hookleftarrow & \mathbf{0} & \mathbf{1} \\ \hline \mathbf{0} & 0.6 & 0.3 \\ \mathbf{1} & 0.4 & 0.7 \end{array}$Use a loop in your solution. Solution ###Code # # We copy and paste the previous code # # initial condition: # Asja will start with one euro, # and so, we assume that the probability of having head is 1 at the beginning. 
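# Note: each iteration of the loop below applies the GameCoins transition matrix
# [[0.6, 0.3], [0.4, 0.7]] to the current (prob_head, prob_tail) distribution.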
prob_head = 1 prob_tail = 0 number_of_iterations = 10 for i in range(number_of_iterations): # the new probability of head is calculated by using the first row of table new_prob_head = prob_head * 0.6 + prob_tail * 0.3 # the new probability of tail is calculated by using the second row of table new_prob_tail = prob_head * 0.4 + prob_tail * 0.7 # update the probabilities prob_head = new_prob_head prob_tail = new_prob_tail # print prob_head and prob_tail print("the probability of getting head after",number_of_iterations,"coin tosses is",prob_head) print("the probability of getting tail after",number_of_iterations,"coin tosses is",prob_tail) ###Output the probability of getting head after 10 coin tosses is 0.42857480279999977 the probability of getting tail after 10 coin tosses is 0.5714251971999997 ###Markdown Task 3 Repeat Task 2 for 20, 30, and 50 coin tosses. Solution ###Code # define iterations as a list iterations = [20,30,50] for iteration in iterations: # initial probabilites prob_head = 1 prob_tail = 0 print("the number of iterations is",iteration) for i in range(iteration): # the new probability of head is calculated by using the first row of table new_prob_head = prob_head * 0.6 + prob_tail * 0.3 # the new probability of tail is calculated by using the second row of table new_prob_tail = prob_head * 0.4 + prob_tail * 0.7 # update the probabilities prob_head = new_prob_head prob_tail = new_prob_tail # print prob_head and prob_tail print("the probability of getting head after",iteration,"coin tosses is",prob_head) print("the probability of getting tail after",iteration,"coin tosses is",prob_tail) print() ###Output the number of iterations is 20 the probability of getting head after 20 coin tosses is 0.42857142859135267 the probability of getting tail after 20 coin tosses is 0.5714285714086464 the number of iterations is 30 the probability of getting head after 30 coin tosses is 0.42857142857142816 the probability of getting tail after 30 coin tosses is 0.5714285714285705 the number of iterations is 50 the probability of getting head after 50 coin tosses is 0.42857142857142805 the probability of getting tail after 50 coin tosses is 0.5714285714285706 ###Markdown Task 4 Repeat Task 2 for 10, 20, and 50 coin tosses by picking different initial conditions, e.g., prob_head = prob_tail = 1/2or prob_head = 0 prob_tail = 1 Solution ###Code # define iterations as a list iterations = [20,30,50] # define initial probability pairs as a double list initial_probabilities =[ [1/2,1/2], [0,1] ] for initial_probability_pair in initial_probabilities: print("probability of head is",initial_probability_pair[0]) print("probability of tail is",initial_probability_pair[1]) print() for iteration in iterations: # initial probabilites [prob_head,prob_tail] = initial_probability_pair print("the number of iterations is",iteration) for i in range(iteration): # the new probability of head is calculated by using the first row of table new_prob_head = prob_head * 0.6 + prob_tail * 0.3 # the new probability of tail is calculated by using the second row of table new_prob_tail = prob_head * 0.4 + prob_tail * 0.7 # update the probabilities prob_head = new_prob_head prob_tail = new_prob_tail # print prob_head and prob_tail print("the probability of getting head after",iteration,"coin tosses is",prob_head) print("the probability of getting tail after",iteration,"coin tosses is",prob_tail) print() print() ###Output probability of head is 0.5 probability of tail is 0.5 the number of iterations is 20 the probability of 
getting head after 20 coin tosses is 0.42857142857391883 the probability of getting tail after 20 coin tosses is 0.5714285714260805 the number of iterations is 30 the probability of getting head after 30 coin tosses is 0.42857142857142827 the probability of getting tail after 30 coin tosses is 0.571428571428571 the number of iterations is 50 the probability of getting head after 50 coin tosses is 0.42857142857142827 the probability of getting tail after 50 coin tosses is 0.571428571428571 probability of head is 0 probability of tail is 1 the number of iterations is 20 the probability of getting head after 20 coin tosses is 0.4285714285564849 the probability of getting tail after 20 coin tosses is 0.5714285714435143 the number of iterations is 30 the probability of getting head after 30 coin tosses is 0.42857142857142794 the probability of getting tail after 30 coin tosses is 0.5714285714285708 the number of iterations is 50 the probability of getting head after 50 coin tosses is 0.42857142857142794 the probability of getting tail after 50 coin tosses is 0.5714285714285706
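###Markdown Notice that, whatever the initial condition, the probabilities settle at roughly 0.4286 for heads and 0.5714 for tails. That limit is the stationary distribution of the GameCoins transition matrix: the probability vector $\pi$ satisfying $A\pi = \pi$, which for this matrix is $\pi = (3/7, 4/7)$. The short check below is an added sketch (it uses numpy, which this notebook does not otherwise import): ###Code import numpy as np

A = np.array([[0.6, 0.3],
              [0.4, 0.7]])  # GameCoins table: columns sum to 1

# Repeatedly applying A drives any starting distribution to the same limit
print(np.linalg.matrix_power(A, 50) @ np.array([1.0, 0.0]))

# That limit is the eigenvector of A for eigenvalue 1, normalised to sum to 1
vals, vecs = np.linalg.eig(A)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()
print(pi)  # ~[0.42857143 0.57142857], i.e. (3/7, 4/7) ###Output _____no_output_____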
Intro_to_Convolutional_Neural_Networks_with_W&B.ipynb
###Markdown Welcome!In this tutorial we'll walk through a simple convolutional neural network to classify the images in the cifar10 dataset.We’ll also set up Weights & Biases to log models metrics, inspect performance and share findings about the best architecture for the network. In this example we're using Google Colab as a convenient hosted environment, but you can run your own training scripts from anywhere and visualize metrics with W&B's experiment tracking tool. Running This Notebook1. Click "Open in playground" to create a copy of this notebook for yourself.2. Save a copy in Google Drive for yourself.3. To enable a GPU, please click Edit > Notebook Settings. Change the "hardware accelerator" to GPU.4. Step through each section below, pressing play on the code blocks to run the cells.Results will be logged to a [shared W&B project page](https://app.wandb.ai/wandb/cnn-intro).We highly encourage you to fork this notebook, tweak the parameters, or try the model with your own dataset! Convolutional Neural Networks**[Slides](https://www.dropbox.com/s/hhb2ohd8wdpccnr/Bloomberg%20Class%202.pdf?dl=0)**![](https://paper-attachments.dropbox.com/s_92F7A2BE132D5E4492B0E3FF3430FFF0FB2390A4135C0D77582A2D21A2EF8567_1573598995521_Screenshot+2019-11-12+14.49.51.png)**Convolution Layer** The convolution layer is made up of a set of independent filters. Each filter slides over the image and creates feature maps that learn different aspects of an image.![](https://paper-attachments.dropbox.com/s_92F7A2BE132D5E4492B0E3FF3430FFF0FB2390A4135C0D77582A2D21A2EF8567_1573598824588_Screenshot+2019-11-12+14.46.40.png)**Convolutional Neural Network**A CNN uses convolutions to connected extract features from local regions of an input. Most CNNs contain a combination of convolutional, pooling and affine layers. CNNs offer fantastic performance on visual recognition tasks, where they have become the state of the art.![](https://paper-attachments.dropbox.com/s_92F7A2BE132D5E4492B0E3FF3430FFF0FB2390A4135C0D77582A2D21A2EF8567_1573598824566_Screenshot+2019-11-12+14.46.59.png)**Pooling** The pooling layer reduce the size of the image representation and so the number of parameters and computation in the network. Pooling usually involves taking either the maximum or average value across the pooled area.![](https://paper-attachments.dropbox.com/s_92F7A2BE132D5E4492B0E3FF3430FFF0FB2390A4135C0D77582A2D21A2EF8567_1573598953569_Screenshot+2019-11-12+14.49.10.png) Basic Convolutional Neural Network ###Code # Install wandb %pip install -qq wandb #import libraries import warnings warnings.filterwarnings('ignore') import os os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' import pandas as pd import numpy as np import tensorflow as tf import wandb # Set Hyper-parameters wandb.init(entity="wandb", project="cnn-intro") config = wandb.config config.concept = 'cnn' config.batch_size = 128 config.epochs = 10 config.learn_rate = 0.001 config.dropout = 0.3 config.dense_layer_nodes = 128 %%wandb # Load data class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] num_classes = len(class_names) (X_train, y_train), (X_test, y_test) = tf.keras.datasets.cifar10.load_data() # normalize data X_train = X_train.astype('float32') / 255. X_test = X_test.astype('float32') / 255. # Convert class vectors to binary class matrices. 
y_train = tf.keras.utils.to_categorical(y_train, num_classes) y_test = tf.keras.utils.to_categorical(y_test, num_classes) # Define model model = tf.keras.models.Sequential() # Conv2D adds a convolution layer with 32 filters that generates 2 dimensional # feature maps to learn different aspects of our image model.add(tf.keras.layers.Conv2D(32, (3, 3), padding='same', input_shape=X_train.shape[1:], activation='relu')) # MaxPooling2D layer reduces the size of the image representation our # convolutional layers learnt, and in doing so it reduces the number of # parameters and computations the network needs to perform. model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2))) # Dropout layer turns off a percentage of neurons at every step model.add(tf.keras.layers.Dropout(config.dropout)) # Flattens our array so we can feed the convolution layer outputs (a matrix) # into our fully connected layer (an array) model.add(tf.keras.layers.Flatten()) # Dense layer creates dense, fully connected layers with x inputs and y outputs # - it simply outputs the dot product of our inputs and weights model.add(tf.keras.layers.Dense(config.dense_layer_nodes, activation='relu')) model.add(tf.keras.layers.Dropout(config.dropout)) model.add(tf.keras.layers.Dense(num_classes, activation='softmax')) # Compile the model and specify the optimizer and loss function model.compile(loss='categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(config.learn_rate), metrics=['accuracy']) # log the number of total parameters config.total_params = model.count_params() print("Total params: ", config.total_params) # Fit the model to the training data, specify the batch size # and the WandbCallback() to track model model.fit(X_train, y_train, epochs=10, batch_size=128, validation_data=(X_test, y_test), verbose=0, callbacks=[wandb.keras.WandbCallback(data_type="image", labels=class_names, save_model=False)]) ###Output _____no_output_____ ###Markdown Convolutional Neural Network with Data AugmentationData augmentation artificially expands the training dataset by creating slightly modified versions of images in the dataset - by scaling, shifting and rotating the images in the training set.![](https://paper-attachments.dropbox.com/s_92F7A2BE132D5E4492B0E3FF3430FFF0FB2390A4135C0D77582A2D21A2EF8567_1573599686286_Screenshot+2019-11-12+15.01.23.png) ###Code # examples/keras-cifar/cifar-gen.py from tensorflow.keras.callbacks import TensorBoard from tensorflow.keras.datasets import cifar10 from tensorflow.keras.preprocessing.image import ImageDataGenerator import numpy as np import os import wandb from wandb.keras import WandbCallback import tensorflow as tf # Define hyperparameters run = wandb.init(entity="wandb", project="cnn-intro") config = run.config config.concept = 'cnn-augmented' config.dropout = 0.25 config.dense_layer_nodes = 100 config.learn_rate = 0.08 config.batch_size = 128 config.epochs = 10 # Load data class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] num_classes = len(class_names) (X_train, y_train), (X_test, y_test) = cifar10.load_data() # Convert class vectors to binary class matrices. 
y_train = tf.keras.utils.to_categorical(y_train, num_classes) y_test = tf.keras.utils.to_categorical(y_test, num_classes) # Define the model (same as above) model = tf.keras.models.Sequential() model.add(tf.keras.layers.Conv2D(32, (3, 3), padding='same', input_shape=X_train.shape[1:], activation='relu')) model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2))) model.add(tf.keras.layers.Dropout(config.dropout)) model.add(tf.keras.layers.Flatten()) model.add(tf.keras.layers.Dense(config.dense_layer_nodes, activation='relu')) model.add(tf.keras.layers.Dropout(config.dropout)) model.add(tf.keras.layers.Dense(num_classes, activation='softmax')) # Compile the model model.compile(loss='categorical_crossentropy', optimizer="adam", metrics=['accuracy']) # log the number of total parameters config.total_params = model.count_params() print("Total params: ", config.total_params) # normalize data X_train = X_train.astype('float32') / 255. X_test = X_test.astype('float32') / 255. # Add data augmentation datagen = ImageDataGenerator( featurewise_center=False, # set input mean to 0 over the dataset samplewise_center=False, # set each sample mean to 0 featurewise_std_normalization=False, # divide inputs by std of the dataset samplewise_std_normalization=False, # divide each input by its std zca_whitening=False, # apply ZCA whitening rotation_range=15, # randomly rotate images in the range (degrees, 0 to 180) width_shift_range=0.1, # randomly shift images horizontally (fraction of total width) height_shift_range=0.1, # randomly shift images vertically (fraction of total height) horizontal_flip=True, # randomly flip images vertical_flip=False) # randomly flip images datagen.fit(X_train) # Fit the model on the batches generated by datagen.flow() model.fit_generator(datagen.flow(X_train, y_train, batch_size=config.batch_size), steps_per_epoch=X_train.shape[0] // config.batch_size, epochs=config.epochs, verbose =0, validation_data=(X_test, y_test), callbacks=[WandbCallback(data_type="image", labels=class_names)]) ###Output _____no_output_____ ###Markdown Visualize Predictions Live Project Overview1. Check out the [project page](https://app.wandb.ai/wandb/cnn-intro) to see your results in the shared project. 1. Press 'option+space' to expand the runs table, comparing all the results from everyone who has tried this script. 1. Click on the name of a run to dive in deeper to that single run on its own run page.![](https://paper-attachments.dropbox.com/s_92F7A2BE132D5E4492B0E3FF3430FFF0FB2390A4135C0D77582A2D21A2EF8567_1574117851346_Screenshot+2019-11-18+14.57.27.png) Visualize PerformanceClick through to a single run to see more details about that run. For example, on [this run page](https://app.wandb.ai/wandb/cnn-intro/runs/6218o47i) you can see the performance metrics I logged when I ran this script.![](https://paper-attachments.dropbox.com/s_92F7A2BE132D5E4492B0E3FF3430FFF0FB2390A4135C0D77582A2D21A2EF8567_1574117639961_Screenshot+2019-11-18+14.53.27.png) Review CodeThe overview tab picks up a link to the code. In this case, it's a link to the Google Colab. If you're running a script from a git repo, we'll pick up the SHA of the latest git commit and give you a link to that version of the code in your own GitHub repo.![](https://paper-attachments.dropbox.com/s_92F7A2BE132D5E4492B0E3FF3430FFF0FB2390A4135C0D77582A2D21A2EF8567_1574117783115_overview.png) Visualize System MetricsThe System tab on the runs page lets you visualize how resource efficient your model was. 
It lets you monitor the GPU, memory, CPU, disk, and network usage in one spot.![](https://paper-attachments.dropbox.com/s_92F7A2BE132D5E4492B0E3FF3430FFF0FB2390A4135C0D77582A2D21A2EF8567_1574118253358_Screenshot+2019-11-18+15.04.10.png) Next StepsAs you can see running sweeps is super easy! We highly encourage you to fork this notebook, tweak the parameters, or try the model with your own dataset! More about Weights & BiasesWe're always free for academics and open source projects. Email [email protected] with any questions or feature suggestions. Here are some more resources:1. [Documentation](http://docs.wandb.com) - Python docs2. [Gallery](https://app.wandb.ai/gallery) - example reports in W&B3. [Articles](https://www.wandb.com/articles) - blog posts and tutorials4. [Community](bit.ly/wandb-forum) - join our Slack community forum ###Code ###Output _____no_output_____
CartopyRotatedGrid.ipynb
###Markdown Using `cartopy` for Plots with Rotated Lon/Lat Grids References* [`cartopy` Documentation](https://scitools.org.uk/cartopy/docs/latest/index.html)* [`cartopy` Code on GitHub](https://github.com/SciTools/cartopy) The ProblemOcean model grids are often rotated at arbitrary angles from North/Southso as to maximize the ratio of water to land in the model domain.Doing so makes the model computation more efficient because less time isspent visiting all-land grid cells.The SalishSeaCast NEMO grid is rotated approximately 29°so that the y-axis of the model grid is more or less aligned with the along-strait axisof the Strait of Georgia.A consequence of such model grid rotations is that plotting model fields on grid coordinates(that are familiar to people who run the model)make efficient use of the plot frame,but plotting fields on lon/lat coordinates(that provides geographical context for people less familiar with the model)can waste a lot of space in the plot frame: ###Code import cartopy.crs import cmocean.cm import matplotlib.pyplot as plt import numpy import xarray %matplotlib inline mesh_mask = xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSn2DMeshMaskV17-02") water_mask = mesh_mask.tmaskutil.isel(time=0) fields = xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSg3DTracerFields1hV19-05") salinity = fields.salinity.sel(time="2020-08-14 14:30", depth=0, method="nearest").where(water_mask) georef = xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSnBathymetryV17-02") # Use `subplot_kw` arg to pass a dict of kwargs containing the `facecolor` value # to the add_subplot() call(s) that subplots() will make. # That fills the axes with a land colour that the field will be plotted on top of. 
fig, axs = plt.subplots(1, 2, figsize=(18, 9), subplot_kw={"facecolor": "#8b7765"}) # grid coordinates plot salinity.plot(ax=axs[0], cmap=cmocean.cm.haline) # Set the plot aspect ratio to match the nominal 500m/440m y/x grid cell size axs[0].set_aspect(5/4.4, adjustable="box") # Show grid lines axs[0].grid() # lon/lat coordinates plot # We have to go deeper into matplotlib because the lon/lat coordinates are 2D arrays # and they can't be made into coordinates of the salinity DataArray quad_mesh = axs[1].pcolormesh(georef.longitude, georef.latitude, salinity, cmap=cmocean.cm.haline, shading="auto") # Colour bar cbar = plt.colorbar(quad_mesh, ax=axs[1]) cbar.set_label(f"{salinity.attrs['long_name']} [{salinity.attrs['units']}]") # Axes labels and title axs[1].set_xlabel(f"{georef.longitude.attrs['long_name']} [{georef.longitude.attrs['units']}]") axs[1].set_ylabel(f"{georef.latitude.attrs['long_name']} [{georef.latitude.attrs['units']}]") axs[1].set_title(f"depth={salinity.depth.item():0.7f}") # Don't call set_aspect() because plotting on lon/lat grid implicitly makes the aspect ratio correct # Show grid lines axs[1].grid() # Make the spacing of the sub-plots look nice fig.tight_layout() ###Output _____no_output_____ ###Markdown The SolutionPlot the model field on grid coordinates with the figure grid lines and axes labelshown in lon/lat coordinates.The [`matplotlib` Basemap Toolkit](https://matplotlib.org/basemap/) provided a way of doingthat:![Basemap rotated lon/lat grid plots](https://salishsea.eos.ubc.ca/nowcast-agrif/31jul20/baynes_sound_surface_31jul20.svg)but Basemap broke forever with the release of [`matplotlib=3.3.0`](https://matplotlib.org/3.3.1/api/prev_api_changes/api_changes_3.3.0.html?highlight=release%20notesremovals) on 17-Jul-2020 with the removal of the deprecated`matplotlib.cbook.dedent()` function that Basemap uses to format error messages(see [this Slack thread](https://salishseacast.slack.com/archives/C01319S2YJW/p1596157394007400) for more details).The Basemap developers recommend the [`cartopy` package](https://scitools.org.uk/cartopy/docs/latest/index.html)as an alternative.As of its [`cartopy=0.18`](https://scitools.org.uk/cartopy/docs/latest/whats_new.htmlwhat-s-new-in-cartopy-0-18)release on 3-May-2020,`cartopy` has [longitude and latitude labelling for all projections](https://github.com/SciTools/cartopy/pull/1117),so it can provide the solution that we need. We need 2 coordinate reference systems (CRSs) based on different[projections](https://scitools.org.uk/cartopy/docs/latest/crs/projections.html)to accomplish our goal. ###Code rotated_crs = cartopy.crs.RotatedPole(pole_longitude=120.0, pole_latitude=63.75) plain_crs = cartopy.crs.PlateCarree() ###Output _____no_output_____ ###Markdown The CRS based on `RotatedPole` provides the rotated lon/lat coordinate grid for the plot.> **Help Wanted**>> Choosing the values for `plot_longitude` and `pole_latitude` is a mystery to me.> I used trial and error until I found the values above that provide an approximately vertical> orientation for the along-strait axis of the Strait of Georgia.>> If you understand the `RotatedPole` projection,> or figure out how to choose values for `plot_longitude` and `pole_latitude` in a systematic way,> please add your knowledge here!This CRS is used as the projection when the plot axes is set up. The CRS based on `PlateCarree` is used to transform the model fieldbetween grid coordinates and lon/lat coordinates when it is plotted. 
###Code # Use `subplot_kw` arg to pass a dict of kwargs containing the `RotatedPole` CRS # and the `facecolor` value to the add_subplot() call(s) that subplots() will make fig, ax = plt.subplots( 1, 1, figsize=(18, 9), subplot_kw={"projection": rotated_crs, "facecolor": "#8b7765"} ) # Use the `transform` arg to tell cartopy to transform the model field # between grid coordinates and lon/lat coordinates when it is plotted quad_mesh = ax.pcolormesh( georef.longitude, georef.latitude, salinity, transform=plain_crs, cmap=cmocean.cm.haline, shading="auto" ) # Colour bar cbar = plt.colorbar(quad_mesh, ax=ax) cbar.set_label(f"{salinity.attrs['long_name']} [{salinity.attrs['units']}]") # Axes title; ax.gridlines() below labels axes tick in a way that makes # additional axes labels unnecessary IMHO ax.set_title(f"depth={salinity.depth.item():0.7f}") # Don't call set_aspect() because plotting on lon/lat grid implicitly makes the aspect ratio correct # Show grid lines # Note that ax.grid() has no effect; ax.gridlines() is from cartopy, not matplotlib ax.gridlines(draw_labels=True, auto_inline=False) # cartopy doesn't seem to play nice with tight_layout() unless we call canvas.draw() first fig.canvas.draw() fig.tight_layout() ###Output _____no_output_____
TP3/TP3_SARAZIN.ipynb
###Markdown Loan SARAZIN & Anna MARIZY Simulations de Variables Aléatoires Générateur fondé sur l’inverse généralisée ###Code import time import numpy as np import numpy.linalg as la import matplotlib.pyplot as plt import scipy.stats as ss #from scipy.stats import uniform #from scipy.stats import expon def generalized_inverse(invF, n) : """Génère n échantillons selon la densité f = invF en utilisant l'inverse généralisé Args: invF (IDK): Inverse de la fonction de répartition n (int): Sample size wanted Returns: array: Array of n samples """ u = ss.uniform.rvs(loc=0, scale=1, size=n) return invF(u) ###Output _____no_output_____ ###Markdown Soit $X$ une variable aléatoire suivant une loi exponentielle de paramètre $\lambda$. Sa fonction de répartition est :$$F(x)=1-\exp(-\lambda x)$$L'inverse généralisé de $F$ est :$$F^{-1}(u)=-\frac{1}{\lambda}\ln(1-u)$$ ###Code def invExp(u, Lambda) : return -np.log(1-u)/Lambda Lambda = 2 nbEchantillon = 10000 x = generalized_inverse(lambda x : invExp(x, Lambda), nbEchantillon) t = np.linspace(0, 5, 1000) z = ss.expon.pdf(t, loc=0, scale=1/Lambda) plt.figure(figsize=(10, 8)) plt.hist(x, bins=100, density=True, label='Echantillons par inverse généralisé') plt.plot(t, z, label='Probability density function') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Pour utiliser cet algorithme, il faut connaitre l'inverse généralisé de la fonction de répartition $F$, afin de généreréchantillon selon la densité $f(x)=F'(x)$. En pratique il n'est pas souvent possible d'accéder à cet inverse généralisé. Générateur Accept-Reject ###Code def acceptReject(f, g, M, n) : """ Génère un échantillon de variables aléatoires selon la densité f = invF en utilisant l'inverse généralisé Args: f (function) : g (funtion) : M (float) : n (int): Sample size wanted Returns: array: Array of n samples """ X = [] k = 0 for i in range(n) : u = ss.uniform.rvs() x = g.rvs() while (f.pdf(x)/(M*g.pdf(x))) < u : u = ss.uniform.rvs(loc=0, scale=1) x = g.rvs() k += 1 X.append(x) print(f"{k} échantillons ont été rejetés pour échantilloner {n} échantillons") return(np.array(X)) M = 5 X100 = acceptReject(ss.norm(loc=0, scale=1), ss.cauchy(loc=0, scale=1), M, 100) X1000 = acceptReject(ss.norm(loc=0, scale=1), ss.cauchy(loc=0, scale=1), M, 1000) t = np.linspace(-3, 3, 1000) z = ss.norm.pdf(t, loc=0, scale=1) fig = plt.figure(figsize=(15, 8)) ax1 = fig.add_subplot(121) ax1.hist(X100, bins=50, color="skyblue", density=True, label='Echantillons par Accept-Reject') ax1.plot(t, z, color='r', label='Probability density function') ax1.legend() ax1.set_title(f"100 échantillons, M = {M}") ax2 = fig.add_subplot(122) ax2.hist(X1000, bins=100, density=True, color="skyblue", label='Echantillons par Accept-Reject') ax2.plot(t, z, color='r', label='Probability density function') ax2.legend() ax2.set_title(f"1000 échantillons, M = {M}") plt.show() M = np.sqrt(2*np.pi/np.e) X100 = acceptReject(ss.norm(loc=0, scale=1), ss.cauchy(loc=0, scale=1), M, 100) X1000 = acceptReject(ss.norm(loc=0, scale=1), ss.cauchy(loc=0, scale=1), M, 1000) X10000 = acceptReject(ss.norm(loc=0, scale=1), ss.cauchy(loc=0, scale=1), M, 10000) t = np.linspace(-5, 5, 1000) z = ss.norm.pdf(t, loc=0, scale=1) fig = plt.figure(figsize=(15, 15)) ax1 = fig.add_subplot(221) ax1.hist(X100, bins=50, color="skyblue", density=True, label='Echantillons par Accept-Reject') ax1.plot(t, z, color='r', label='Probability density function') ax1.legend() ax1.set_title(f"100 échantillons, M = {M:.2f}") ax2 = fig.add_subplot(222) ax2.hist(X1000, 
bins=100, density=True, color="skyblue", label='Echantillons par Accept-Reject') ax2.plot(t, z, color='r', label='Probability density function') ax2.legend() ax2.set_title(f"1000 échantillons, M = {M:.2f}") ax3 = fig.add_subplot(223) ax3.hist(X10000, bins=100, density=True, color="skyblue", label='Echantillons par Accept-Reject') ax3.plot(t, z, color='r', label='Probability density function') ax3.legend() ax3.set_title(f"10000 échantillons, M = {M:.2f}") t = t + 5 z = ss.norm.pdf(t, loc=5, scale=1) ax4 = fig.add_subplot(224) ax4.hist(X10000 + 5, bins=100, color="skyblue", density=True, label='Echantillons par Accept-Reject') ax4.plot(t, z, color='r', label='Probability density function') ax4.legend() ax4.set_title(f"loi normale de moyenne 5 à partir des \néchantillons précédents") plt.show() ###Output _____no_output_____ ###Markdown La méthode d'accept-reject permet bein d'échantilloner $X $suivant la densité $f(x)$, sachant que l'on connaîtune densité g(x) à partir de laquelle on sait échantillonner et qui est telle que :$$f(x)\leq Mg(x)$$Cependant, si $M$ n'est pas bien choisi, l'algorithme n'est pas optimal puisque $x$ est accepté avec une probabilité $\frac{1}{M}$. On voit de plus avec les histogrammes tracés ci-dessus qu'un nombre élevé d'échantillon est nécessaire pour que l'ensemble des échantillons approchent la densité de probabilité souhaitée.A partir d'un nombre conséquent d'échantillons suivant une loi normale centrée réduite, obtenus aec la méthode d'accept-reject, il est facile de calculer 10000 échantillons distribués suivant une loi normale de moyenne 5. Méthode de Box-Muller pour des lois normales univariées ###Code def boxMuller(n) : """Génère n échantillons à partir de la loi normale bivariée réduite Args: n (int): Sample size wanted Returns: array: Array of n samples """ Z = [] for i in range(n) : u1, u2 = ss.uniform.rvs(loc=0, scale=1, size=2) R, V = -2*np.log(u1), 2*np.pi*u2 z1, z2 = np.sqrt(R)*np.cos(V), np.sqrt(R)*np.sin(V) Z.append([z1, z2]) return(np.array(Z)) ###Output _____no_output_____ ###Markdown Lors de l'éxécution de la méthode de Box-Muller, on obtient deux échantillons qui suivent une loi normale univariée. ###Code nbEchantillon = 10000 Z = boxMuller(nbEchantillon//2) t = np.linspace(-5, 5, 1000) z = ss.norm.pdf(t, loc=0, scale=1) fig = plt.figure(figsize=(10, 6)) ax1 = fig.add_subplot(111) ax1.hist(np.ravel(Z), bins=50, color="skyblue", density=True, label='Echantillons par Box_Muller') ax1.plot(t, z, color='r', label='Probability density function') ax1.legend() ax1.set_title(f"Echantillonage de {nbEchantillon} échantillons") plt.show() ###Output _____no_output_____ ###Markdown Avec l'algorithme de Box-Muller, les échantillons obtenues coïcident avec la loi de probabilité cible. Cependant, cette méthode utilise des fonctions triginométriques squi sont coûteuses en temps de calcul pour un couple d'échantillons (cependant ce coût reste négligeable comparé à d'autres algorithmes comme accept-rejet par exemple). 
###Code nbEchantillon = 5000 Z = boxMuller(nbEchantillon//2) t = np.linspace(-10, 4, 1000) z = ss.norm.pdf(t, loc=-3, scale=np.sqrt(3)) fig = plt.figure(figsize=(10, 6)) ax1 = fig.add_subplot(111) ax1.hist(np.ravel(Z*np.sqrt(3)-3), bins=50, color="skyblue", density=True, label='Echantillons par Box_Muller') ax1.plot(t, z, color='r', label='Probability density function') ax1.legend() ax1.set_title(f"loi normale de moyenne -3 et de variance 3\nà partir des échantillons précédents") plt.show() ###Output _____no_output_____ ###Markdown Générer des échantillons d’une loi normale multivariée ###Code def multivarie(n, mu, sigma) : Ztot = [] for i in range(n) : Z = np.reshape(boxMuller(mu.size//2), (mu.size, 1)) A = la.cholesky(sigma) Ztot.append(mu + A @ Z) return(np.array(Ztot)) nbEchantillon = 1000 mu = np.array([[0], [50], [100], [-50], [-100], [200]]) sigma = np.array([[11, 10, 5, 9, 4, 2], [10, 13, 9, 15, 5, 3], [5, 9, 15, 11, 3, 1], [9, 15, 11, 21, 6, 4], [4, 5, 3, 6, 5, 1], [2, 3, 1, 4, 1, 1]]) Z = multivarie(nbEchantillon, mu, sigma) fig = plt.figure(figsize=(15, 20)) for i in range(mu.size) : t = np.linspace(np.min(Z[:, i]) - sigma[i][i]/5, np.max(Z[:, i]) + sigma[i][i]/5, 1000) z = ss.norm.pdf(t, loc=mu[i][0], scale=np.sqrt(sigma[i][i])) ax = fig.add_subplot(3, 2, i + 1) ax.hist(np.ravel(Z[:, i]), bins=50, color="skyblue", density=True, label='Echantillons par loi multivariée') ax.plot(t, z, color='r', label='Probability density function') ax.legend() ax.set_title(f"loi normale de moyenne {mu[i][0]} et de variance {sigma[i][i]}") plt.show() ###Output _____no_output_____ ###Markdown A expliquer Echantillonner suivant une loi de Bernouilli ###Code def Binomial(n, p) : X = [] for i in range(n) : u = ss.uniform.rvs(loc=0, scale=1) if u < p : X.append(0) else : X.append(1) return(np.array(X)) nbEchantillon = 1000 p = 0.7 Z = Binomial(nbEchantillon, 1 - p) fig = plt.figure(figsize=(10, 6)) ax1 = fig.add_subplot(111) ax1.hist(Z, bins=2, color="skyblue", density=False) ax1.set_title(f"{nbEchantillon} échantillons suivant une loi de Bernouilli de paramètre p = {p}") plt.show() n0 = (Z.size - np.count_nonzero(Z))/Z.size n1 = (np.count_nonzero(Z))/Z.size print(f"La fréquence de 0 dans cette série d'échantillons est de {n0}") print(f"La fréquence de 1 dans cette série d'échantillons est de {n1}") ###Output _____no_output_____ ###Markdown La fréquence de 0 dans l'échantillon généré est proche du paramètre de la loi de Bernoulli cible. Il faut cependant penser à faire attention au seuil utilisé dans l'algorithme. En effet, on veut que $Prob(U>1-p)=p$ pour que $P(X=1)=p)$. 
Algorithme de Metropolis-Hastings Influence du support de la loi de proposition ###Code def MH_indep(f, q, nbEchantillon, nbBurn) : X = np.zeros(nbEchantillon) x = q.rvs() for t in range(nbEchantillon + nbBurn) : y = q.rvs() rho = min((f(y)*q.pdf(x))/(f(x)*q.pdf(y)), 1) u = ss.uniform.rvs(loc = 0, scale = 1) if u < rho : x = y if t >= nbBurn : X[t - nbBurn] = x return(X) nbEchantillon = 20000 nbBurn = 500 q = ss.uniform(loc=-1, scale=2) f = ss.norm(loc=0, scale=1).pdf X = MH_indep(f, q, nbEchantillon, nbBurn) t = np.linspace(-1.5, 1.5, 1000) z = f(t) fig = plt.figure(figsize=(10, 6)) ax1 = fig.add_subplot(111) ax1.hist(X, bins=50, color="skyblue", density=True, label='Echantillons') ax1.plot(t, z, color='r', label='Probability density function') ax1.set_title(f"Echantillonage de {nbEchantillon} échantillons par Metropolis-Hastings") ax1.legend() plt.show() ###Output _____no_output_____ ###Markdown Comme une variable aléatoire suivant une loi normale centrée réduite prend des valeurs sur $\mathbb{R}$, et que lorsqu'on utilise loi de proposition une loi uniforme $\mathcal{U}[-1;1]$ toutes les valeurs possibles sont sur $[-1;1]$. ###Code nbEchantillon = 20000 nbBurn1, nbBurn2 = 500, 1000 q1, q2 = ss.uniform(loc=-5, scale=10), ss.uniform(loc=-20, scale=40) f = ss.norm(loc=0, scale=1).pdf X1 = MH_indep(f, q1, nbEchantillon, nbBurn1) X2 = MH_indep(f, q2, nbEchantillon, nbBurn2) t = np.linspace(-4, 4, 1000) z = f(t) fig = plt.figure(figsize=(15, 6)) fig.suptitle(f"Echantillonage de {nbEchantillon} échantillons") ax1 = fig.add_subplot(121) ax1.hist(X1, bins=75, color="skyblue", density=True, label='Echantillons') ax1.plot(t, z, color='r', label='Probability density function') ax1.set_title(f"loi de proposition U[-5; 5], BurnIn = {nbBurn1}") ax1.legend() ax2 = fig.add_subplot(122) ax2.hist(X2, bins=75, color="skyblue", density=True, label='Echantillons') ax2.plot(t, z, color='r', label='Probability density function') ax2.set_title(f"loi de proposition U[-20; 20], BurnIn = {nbBurn2}") ax2.legend() plt.show() ###Output _____no_output_____ ###Markdown Si on augmente le nombre de BurnIn, on s'assure d'une meilleure convergence vers la loi cible. En revanche, si le nombre de BurnIn reste inchangé mais qu'on utilise une loi uniforme sur un intervalle plus grand, on voit que la 'convergence' est plus lente et que la simulation est moins précise. Avec une loi de proposition prenant ses valeurs sur un intervalle contenant la majorité des valeurs prises par la loi cible, alors la suite d'échantillons obtenue se rapproche d'une suite de variables aléatoires de densité la loi cible. 
Echantillonner suivant une loi possédant deux modes ###Code mu1, mu2 = 10, -5 a1, a2 = 1, 2 p = 0.3 def f(x) : return p*ss.laplace(loc=mu1, scale=a1).pdf(x) + (1-p)*ss.laplace(loc=mu2, scale=a2).pdf(x) t = np.linspace(-20, 20, 10000) fig = plt.figure(figsize=(10, 6)) ax1 = fig.add_subplot(111) ax1.plot(t, f(t)) ax1.set_title('Densité de probabilité cible f') plt.show() nbEchantillon = 10000 nbBurn = 100 q = ss.norm(loc=0, scale=10) t = time.time() X = MH_indep(f, q, nbEchantillon, nbBurn) print(time.time() - t, "s") t = np.linspace(-20, 20, 10000) fig = plt.figure(figsize=(10, 6)) ax1 = fig.add_subplot(111) ax1.plot(t, f(t)) ax1.hist(X, bins=75, color="skyblue", density=True, label='Echantillons par Box_Muller') ax1.set_title('Densité de probabilité cible f') ax1.legend() plt.show() ###Output _____no_output_____ ###Markdown Echantillonnage de Monte Carlo parfait ###Code def get_part(f, nbPart=100) : moy = 0 for _ in range(nbPart) : moy += f.rvs() return moy/nbPart def echant_monte_carlo(f, nbEchantillon=1000, nbPart=100) : t = time.time() Vm = [] for _ in range(nbEchantillon) : Vm.append(get_part(f, nbPart)) Vm = np.array(Vm) fig = plt.figure(figsize=(15, 8)) fig.suptitle(f'Moyennes de {nbEchantillon} échantillons de {nbPart} particules') ax1 = fig.add_subplot(121) ax1.plot(Vm, '.', label='Moyenne des échantillons') ax1.plot([0, nbEchantillon], [f.mean(), f.mean()], 'r-', label='Moyenne cible') ax1.legend() ax2 = fig.add_subplot(122) ax2.hist(Vm, bins=50, color="skyblue", density=True) plt.show() print(f"Temps d'exécution : {time.time() - t:.4f} s") print(f"La moyenne de tous les échantillons est de {np.mean(Vm):.3f}") return Vm, np.mean(Vm) f = ss.norm(loc=5, scale=2) Vm1, m = echant_monte_carlo(f, nbEchantillon=1000, nbPart=100) Vm2, m = echant_monte_carlo(f, nbEchantillon=1000, nbPart=1000) fig = plt.figure(figsize=(10, 8)) ax1 = fig.add_subplot(111) ax1.plot(Vm1, '.', color='darkorange', label='1000 échantillons de 100 particules') ax1.plot(Vm2, '.', color='dodgerblue', label='1000 échantillons de 1000 particules') ax1.plot([0, 1000], [5, 5], 'r-', label='Moyenne cible') ax1.legend() plt.show() ###Output _____no_output_____
ai-platform-unified/notebooks/unofficial/sdk/AI_Platform_(Unified)_SDK_AutoML_Text_Sentiment_Training.ipynb
###Markdown Feedback or issues?For any feedback or questions, please open an [issue](https://github.com/googleapis/python-aiplatform/issues). Vertex SDK for Python: AutoML Text Sentiment ExampleTo use this Jupyter notebook, copy the notebook to a Google Cloud Notebooks instance with Tensorflow installed and open it. You can run each step, or cell, and see its results. To run a cell, use Shift+Enter. Jupyter automatically displays the return value of the last line in each cell. For more information about running notebooks in Google Cloud Notebook, see the [Google Cloud Notebook guide](https://cloud.google.com/vertex-ai/docs/general/notebooks).This notebook demonstrate how to create an AutoML Text Sentiment Model, with a Vertex AI text dataset, and how to serve the model for online prediction.Note: you may incur charges for training, prediction, storage or usage of other GCP products in connection with testing this SDK Install Vertex SDK for PythonAfter the SDK installation the kernel will be automatically restarted. ###Code !pip3 uninstall -y google-cloud-aiplatform !pip3 install google-cloud-aiplatform import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) import sys if "google.colab" in sys.modules: from google.colab import auth auth.authenticate_user() ###Output _____no_output_____ ###Markdown Enter Your Project and GCS BucketEnter your Project Id in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. ###Code MY_PROJECT = "YOUR PROJECT ID" MY_STAGING_BUCKET = "gs://YOUR BUCKET" # bucket should be in same region as ucaip ###Output _____no_output_____ ###Markdown DatasetTo create a sentiment analysis model, we will use the open dataset from FigureEight that analyzes Twitter mentions of the allergy medicine Claritin. Please reference [AutoML Documentation](https://cloud.google.com/natural-language/automl/docs/quickstartmodel_objectives) for more information. ###Code # Text Classification IMPORT_FILE = "gs://cloud-samples-data/language/claritin.csv" SENTIMENT_MAX = 4 ###Output _____no_output_____ ###Markdown Initialize Vertex SDK for PythonInitialize the *client* for Vertex AI. ###Code from google.cloud import aiplatform aiplatform.init(project=MY_PROJECT, staging_bucket=MY_STAGING_BUCKET) ###Output _____no_output_____ ###Markdown Create Managed Text Dataset from CSV ###Code ds = aiplatform.TextDataset.create( display_name="text-sentiment", gcs_source=[IMPORT_FILE], import_schema_uri=aiplatform.schema.dataset.ioformat.text.sentiment, ) ds.resource_name ###Output _____no_output_____ ###Markdown Launch Training Job and get Model ###Code job = aiplatform.AutoMLTextTrainingJob( display_name="text-sentiment", prediction_type="sentiment", sentiment_max=SENTIMENT_MAX, ) # This will take around an hour to run model = job.run( dataset=ds, training_fraction_split=0.6, validation_fraction_split=0.2, test_fraction_split=0.2, model_display_name="text-sentiment", ) ###Output _____no_output_____ ###Markdown Deploy Model ###Code endpoint = model.deploy() ###Output _____no_output_____ ###Markdown Predict on Endpoint ###Code instances_list = [{"content": "Claritin is the absolute best"}] prediction = endpoint.predict(instances) prediction ###Output _____no_output_____ ###Markdown Feedback or issues?For any feedback or questions, please open an [issue](https://github.com/googleapis/python-aiplatform/issues). 
AI Platform (Unified) SDK: AutoML Text Sentiment ExampleTo use this Jupyter notebook, copy the notebook to an AI Platform(Unified) Notebooks instance with Tensorflow installed and open it. You can run each step, or cell, and see its results. To run a cell, use Shift+Enter. Jupyter automatically displays the return value of the last line in each cell. For more information about running notebooks in AI Platform(Unified) Notebook, see the [AI Platform(Unified) Notebook guide](https://cloud.google.com/ai-platform-unified/docs/general/notebooks).This notebook demonstrate how to create an AutoML Text Sentiment Model, with an AI Platform (Unified) Text Dataset, and how to serve the model for online prediction.Note: you may incur charges for training, prediction, storage or usage of other GCP products in connection with testing this SDK Install AI Platform (Unified) SDKAfter the SDK installation the kernel will be automatically restarted. ###Code !pip3 uninstall -y google-cloud-aiplatform !pip3 install google-cloud-aiplatform import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) import sys if "google.colab" in sys.modules: from google.colab import auth auth.authenticate_user() ###Output _____no_output_____ ###Markdown Enter Your Project and GCS BucketEnter your Project Id in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. ###Code MY_PROJECT = "YOUR PROJECT ID" MY_STAGING_BUCKET = "gs://YOUR BUCKET" # bucket should be in same region as ucaip ###Output _____no_output_____ ###Markdown DatasetTo create a sentiment analysis model, we will use the open dataset from FigureEight that analyzes Twitter mentions of the allergy medicine Claritin. Please reference [AutoML Documentation](https://cloud.google.com/natural-language/automl/docs/quickstartmodel_objectives) for more information. ###Code # Text Classification IMPORT_FILE = "gs://cloud-samples-data/language/claritin.csv" SENTIMENT_MAX = 4 ###Output _____no_output_____ ###Markdown Initialize AI Platform (Unified) SDKInitialize the *client* for AI Platform (Unified). ###Code from google.cloud import aiplatform aiplatform.init(project=MY_PROJECT, staging_bucket=MY_STAGING_BUCKET) ###Output _____no_output_____ ###Markdown Create Managed Text Dataset from CSV ###Code ds = aiplatform.TextDataset.create( display_name="text-sentiment", gcs_source=[IMPORT_FILE], import_schema_uri=aiplatform.schema.dataset.ioformat.text.sentiment, ) ds.resource_name ###Output _____no_output_____ ###Markdown Launch Training Job and get Model ###Code job = aiplatform.AutoMLTextTrainingJob( display_name="text-sentiment", prediction_type="sentiment", sentiment_max=SENTIMENT_MAX, ) # This will take around an hour to run model = job.run( dataset=ds, training_fraction_split=0.6, validation_fraction_split=0.2, test_fraction_split=0.2, model_display_name="text-sentiment", ) ###Output _____no_output_____ ###Markdown Deploy Model ###Code endpoint = model.deploy() ###Output _____no_output_____ ###Markdown Predict on Endpoint ###Code instances_list = [{"content": "Claritin is the absolute best"}] prediction = endpoint.predict(instances) prediction ###Output _____no_output_____
Instructions/Starter_Code/account_summary.ipynb
###Markdown Plaid Access TokenIn this section, you will use the plaid-python API to generate the correct authentication tokens to access data in the free developer Sandbox. This mimics how you might connect to your own account or a customer account, but due to privacy issues, this homework will only require connecting to and analyzing the fake data from the developer sandbox that Plaid provides. Complete the following steps to generate an access token:1. Create a client to connect to plaid2. Use the client to generate a public token and request the following items: ['transactions', 'income', 'assets']3. Exchange the public token for an access token4. Test the access token by requesting and printing the available test accounts 1. Create a client to connect to plaid ###Code INSTITUTION_ID = "ins_109508" api_key = 'tjhwakjgfhkasjdhfsajdflsj$$$' api_request = 'https://www.plaid.com/api/' ###Output _____no_output_____ ###Markdown 2. Generate a public token ###Code # YOUR CODE HERE ###Output _____no_output_____ ###Markdown 3. Exchange the public token for an access token ###Code # YOUR CODE HERE ###Output _____no_output_____ ###Markdown 4. Fetch Accounts ###Code # YOUR CODE HERE ###Output _____no_output_____ ###Markdown --- Account Transactions with PlaidIn this section, you will use the Plaid Python SDK to connect to the Developer Sandbox account and grab a list of transactions. You will need to complete the following steps:1. Use the access token to fetch the transactions for the last 90 days2. Print the categories for each transaction type3. Create a new DataFrame using the following fields from the JSON transaction data: `date, name, amount, category`. (For categories with more than one label, just use the first category label in the list)4. Convert the data types to the appropriate types (i.e. datetimeindex for the date and float for the amount) 1. Fetch the Transactions for the last 90 days ###Code # YOUR CODE HERE ###Output _____no_output_____ ###Markdown 2. Print the categories for each transaction ###Code # YOUR CODE HERE ###Output _____no_output_____ ###Markdown 3. Create a new DataFrame using the following fields from the JSON transaction data: date, name, amount, category. (For categories with more than one label, just use the first category label in the list) ###Code # YOUR CODE HERE ###Output _____no_output_____ ###Markdown 4. Convert the data types to the appropriate types (i.e. datetimeindex for the date and float for the amount) ###Code # YOUR CODE HERE ###Output _____no_output_____ ###Markdown --- Income Analysis with PlaidIn this section, you will use the Plaid Sandbox to complete the following:1. Determine the previous year's gross income and print the results2. Determine the current monthly income and print the results3. Determine the projected yearly income and print the results ###Code # YOUR CODE HERE ###Output _____no_output_____ ###Markdown --- Budget AnalysisIn this section, you will use the transactions DataFrame to analyze the customer's budget:1. Calculate the total spending per category and print the results (Hint: groupby or count transactions per category)2. Generate a bar chart with the number of transactions for each category 3. Calculate the expenses per month4. Plot the total expenses per month Calculate the expenses per category ###Code # YOUR CODE HERE ###Output _____no_output_____ ###Markdown Calculate the expenses per month ###Code # YOUR CODE HERE ###Output _____no_output_____
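###Markdown The budget-analysis cells above are left as `# YOUR CODE HERE` placeholders. Independent of the Plaid SDK version, the pandas side of those steps can be sketched as follows. It assumes the transactions DataFrame built in the earlier section is named `df`, is indexed by a `DatetimeIndex` of transaction dates, and has `name`, `amount` (float) and `category` columns; these names are chosen for illustration and are not prescribed by the starter code.
###Code
import matplotlib.pyplot as plt

# 1. Total spending per category
spending_per_category = df.groupby("category")["amount"].sum()
print(spending_per_category)

# 2. Bar chart of the number of transactions per category
df["category"].value_counts().plot(kind="bar", title="Transactions per category")
plt.show()

# 3./4. Total expenses per month (uses the DatetimeIndex created earlier)
expenses_per_month = df["amount"].resample("M").sum()
expenses_per_month.plot(title="Expenses per month")
plt.show()
###Output _____no_output_____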
platypus-simulate/Platypus_simulator.ipynb
###Markdown Simulator for *PLATYPUS* and *SPATZ*The purpose of this notebook is to simulate a measurement on one of these two instruments at ACNS. Its deficiency is that it doesn't take into account systematic errors (such as misalignment, gravity effects) and can't do background subtractions. The latter is done during an experiment to decrease $R_{min}$, which has the effect of increasing error bar size. ###Code %matplotlib inline import numpy as np import matplotlib.pyplot as plt from tof_simulator import ReflectSimulator, SpectrumDist from refnx.reflect import Slab, Structure, SLD, ReflectModel from refnx.dataset import ReflectDataset ###Output _____no_output_____ ###Markdown Since *PLATYPUS* and *SPATZ* are time-of-flight instruments it's necessary to have a wavelength spectrum to be able to generate neutrons of different wavelength onto the sample. We take a direct beam spectrum and convert it into a probability distribution The `SpectrumDist` object is a `scipy.stats` like object to describe the neutron intensity as a function of wavelength. You can use the `pdf, cdf, ppf, rvs` methods like you would a `scipy.stats` distribution. Of particular interest is the `rvs` method which randomly samples neutrons whose distribution obeys the direct beam spectrum. Random variates are generated the `rv_continuous` superclass by classical generation of uniform noise coupled with the `ppf`. `ppf` is approximated by linear interpolation of `q` into a pre-calculated inverse `cdf`. The `ReflectSimulator` class generates neutrons whose distribution follows the wavelength and angular resolution of the instrument. If uses those neutrons to calculate a reflectivity pattern based on a user supplied `ReflectModel`. The resolution properties of `ReflectModel` are ignored. Generate the structure you want to simulate: ###Code air = SLD(0) sio2 = SLD(3.47) si = SLD(2.07) s = air | sio2(225, 3) | si(0, 3) model = ReflectModel(s, bkg=5e-7) ###Output _____no_output_____ ###Markdown Create the instrument simulator and sample. Here we sample at two angles of incidence. This would correspond to *roughly* 1200 s and 3600 s on Platypus. It's possible that the acquisition time/ of samples correspondence is up to an order of magnitude out. ###Code simulator0 = ReflectSimulator(model, 0.65, rebin=2) simulator1 = ReflectSimulator(model, 3, rebin=2) simulator0.run(2400000) simulator1.run(150000000) ###Output _____no_output_____ ###Markdown The `ReflectSimulator.reflectivity` attribute is a `ReflectDataset`, which can have additional data spliced onto it. ###Code data = ReflectDataset() data += simulator0.reflectivity data += simulator1.reflectivity data.save('sim.txt') ebc = plt.errorbar(data.x, data.y, yerr=data.y_err) ebc[0].set_linewidth(0) plt.plot(data.x, model(data.x, x_err=data.x_err)) plt.yscale('log') ###Output _____no_output_____ ###Markdown Simulator for *PLATYPUS* and *SPATZ*The purpose of this notebook is to simulate a measurement on one of these two instruments at ACNS. Its deficiency is that it doesn't take into account systematic errors (such as misalignment, gravity effects) and can't do background subtractions. The latter is done during an experiment to decrease $R_{min}$, which has the effect of increasing error bar size. 
###Code %matplotlib inline import numpy as np import matplotlib.pyplot as plt from tof_simulator import ReflectSimulator, SpectrumDist from refnx.reflect import Slab, Structure, SLD, ReflectModel from refnx.dataset import ReflectDataset ###Output _____no_output_____ ###Markdown Since *PLATYPUS* and *SPATZ* are time-of-flight instruments it's necessary to have a wavelength spectrum to be able to generate neutrons of different wavelength onto the sample. We take a direct beam spectrum and convert it into a probability distribution The `SpectrumDist` object is a `scipy.stats` like object to describe the neutron intensity as a function of wavelength. You can use the `pdf, cdf, ppf, rvs` methods like you would a `scipy.stats` distribution. Of particular interest is the `rvs` method which randomly samples neutrons whose distribution obeys the direct beam spectrum. Random variates are generated the `rv_continuous` superclass by classical generation of uniform noise coupled with the `ppf`. `ppf` is approximated by linear interpolation of `q` into a pre-calculated inverse `cdf`. The `ReflectSimulator` class generates neutrons whose distribution follows the wavelength and angular resolution of the instrument. If uses those neutrons to calculate a reflectivity pattern based on a user supplied `ReflectModel`. The resolution properties of `ReflectModel` are ignored. Generate the structure you want to simulate: ###Code air = SLD(0) sio2 = SLD(3.47) si = SLD(2.07) s = air | sio2(225, 3) | si(0, 3) model = ReflectModel(s, bkg=5e-7) ###Output _____no_output_____ ###Markdown Create the instrument simulator and sample. Here we sample at two angles of incidence. This would correspond to *roughly* 1200 s and 3600 s on Platypus. It's possible that the acquisition time/ of samples correspondence is up to an order of magnitude out. ###Code simulator0 = ReflectSimulator(model, 0.65, rebin=2) simulator1 = ReflectSimulator(model, 3, rebin=2) simulator0.sample(2400000) simulator0.sample_direct(1500000) for i in range(150): # 150e6 beam monitor counts simulator1.sample(1000000) simulator1.sample_direct(1500000) ###Output _____no_output_____ ###Markdown The `ReflectSimulator.reflectivity` attribute is a `ReflectDataset`, which can have additional data spliced onto it. ###Code data = ReflectDataset() data += simulator0.reflectivity data += simulator1.reflectivity data.save('sim.txt') ebc = plt.errorbar(data.x, data.y, yerr=data.y_err) ebc[0].set_linewidth(0) plt.plot(data.x, model(data.x, x_err=data.x_err)) plt.yscale('log') plt.plot(data.x, (data.y - model(data.x, x_err=data.x_err)) / data.y_err) plt.xscale('log'); ###Output _____no_output_____
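###Markdown The `SpectrumDist` description above (random variates produced by pushing uniform noise through a `ppf` that linearly interpolates into a pre-calculated inverse CDF) is classical inverse-transform sampling. The sketch below illustrates the idea with plain NumPy on a made-up spectrum; it is not the actual `tof_simulator` implementation, and the wavelength grid and intensity shape are placeholders.
###Code
import numpy as np

# Placeholder direct-beam spectrum: intensity as a function of wavelength (angstrom)
wavelength = np.linspace(2.5, 19.0, 1000)
intensity = np.exp(-0.5 * ((wavelength - 6.0) / 2.0) ** 2)

# Pre-calculate a normalised CDF on the wavelength grid
cdf = np.cumsum(intensity)
cdf = cdf / cdf[-1]

def rvs(n, rng=None):
    """Sample n wavelengths: uniform noise mapped through the interpolated inverse CDF."""
    rng = np.random.default_rng() if rng is None else rng
    q = rng.uniform(size=n)
    return np.interp(q, cdf, wavelength)

samples = rvs(100_000)  # their distribution follows the spectrum above
###Output _____no_output_____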
notebooks/06.0-neural-networks/starling_figs/.ipynb_checkpoints/Starling-128-GAN-larger-checkpoint.ipynb
###Markdown J Diagram ###Code from avgn.visualization.projections import scatter_spec from avgn.utils.general import save_fig from avgn.utils.paths import FIGURE_DIR, ensure_dir ensure_dir(FIGURE_DIR / 'networks' / 'starling128') gen_func = model.generate interp_len = 5 dset_iter = iter(dataset) x1 = np.reshape(next(dset_iter)[0] / 255, (1,DIMS[0],DIMS[1],1)) x2 = np.reshape(next(dset_iter)[0] / 255, (1,DIMS[0],DIMS[1],1)) x3 = np.reshape(next(dset_iter)[0] / 255, (1,DIMS[0],DIMS[1],1)) exdat = np.vstack([x1, x2, x3]) fig, axs = plt.subplots(ncols=3, figsize=(15,5)) axs[0].matshow(np.squeeze(x1), origin='lower') axs[1].matshow(np.squeeze(x2), origin='lower') axs[2].matshow(np.squeeze(x3), origin='lower') # make a whole bunch of reconstructions # get z all_z = [] all_x = [] for i in tqdm(range(100)): z_list = tf.random.normal(shape=(BATCH_SIZE, N_Z)) x_list = gen_func(z_list).numpy() x_list = np.reshape(x_list, (len(x_list), np.product(DIMS))) all_x.append(x_list) all_z.append(z_list) all_x = np.vstack(all_x) all_z = np.vstack(all_z) np.shape(all_x), np.shape(all_z) x1_dist = np.mean(np.abs(x1.flatten() - all_x), axis = 1) x2_dist = np.mean(np.abs(x2.flatten() - all_x), axis = 1) x3_dist = np.mean(np.abs(x3.flatten() - all_x), axis = 1) fig, axs = plt.subplots(ncols=3, nrows=2, figsize=(15,10)) axs[0,0].matshow(np.squeeze(x1), origin='lower') axs[0,1].matshow(np.squeeze(x2), origin='lower') axs[0,2].matshow(np.squeeze(x3), origin='lower') axs[1,0].matshow(all_x[np.argmin(x1_dist)].reshape((DIMS[0],DIMS[1])), vmin=0, origin='lower') axs[1,1].matshow(all_x[np.argmin(x2_dist)].reshape((DIMS[0],DIMS[1])), vmin=0, origin='lower') axs[1,2].matshow(all_x[np.argmin(x3_dist)].reshape((DIMS[0],DIMS[1])), vmin=0, origin='lower') pt1 = all_z[np.argmin(x1_dist)] pt2 = all_z[np.argmin(x2_dist)] pt3 = all_z[np.argmin(x3_dist)] #pt1x,pt2x,pt3x =gen_func(tf.stack([pt1,pt2,pt3])) #get proportions z_list = [] for ci, C in enumerate(np.linspace(0, 1, interp_len)): for bi, B in enumerate(np.linspace(0, 1, interp_len)): A = 1 - C - B z_list.append( C * pt1 + B * pt2 + A * pt3 ) z_list = np.vstack(z_list) # get X x_list = gen_func(z_list).numpy() # make diagram Jdiagram = np.ones((x_list.shape[1]*(interp_len), x_list.shape[2]*(interp_len+2), x_list.shape[3])) np.shape(Jdiagram) #populate i = 0 for ci, C in enumerate(np.linspace(0, 1, interp_len)): for bi, B in enumerate(np.linspace(0, 1, interp_len)): Jdiagram[(interp_len -1 - bi)*x_list.shape[1]:((interp_len - bi))*x_list.shape[1], (ci+1)*x_list.shape[2]:(ci+2)*x_list.shape[2], :] = x_list[i] i+=1 Jdiagram[(interp_len - 1)*x_list.shape[1]: (interp_len)*x_list.shape[1], :x_list.shape[2], :] = x3 Jdiagram[(interp_len - 1)*x_list.shape[1]: (interp_len)*x_list.shape[1], (interp_len +1)*x_list.shape[2]: (interp_len+2)*x_list.shape[2] , :] = x1 Jdiagram[: x_list.shape[1], :x_list.shape[2], :] = x2 fig, ax = plt.subplots(figsize=(10,10)) ax.matshow(np.squeeze(Jdiagram), vmin = 0, cmap=plt.cm.afmhot, origin = 'lower') ax.axis('off') #save_fig(FIGURE_DIR / 'networks' / 'starling128'/ ('GAN_JDiagram3_128'), dpi=300, save_jpg=True) ###Output _____no_output_____ ###Markdown plot 3 way interpolation ###Code gen_func = model.generate interp_len = 5 from avgn.visualization.projections import scatter_spec from avgn.utils.general import save_fig from avgn.utils.paths import FIGURE_DIR, ensure_dir # get z pt1,pt2,pt3 = tf.random.normal(shape=(BATCH_SIZE, N_Z))[:3] #pt1x,pt2x,pt3x =gen_func(tf.stack([pt1,pt2,pt3])) #get proportions z_list = [] for ci, C in enumerate(np.linspace(0, 
1, interp_len)): for bi, B in enumerate(np.linspace(0, 1, interp_len)): A = 1 - C - B z_list.append( C * pt1 + B * pt2 + A * pt3 ) z_list = np.vstack(z_list) # get X x_list = gen_func(z_list).numpy() # make diagram Jdiagram = np.zeros((x.shape[1]*interp_len, x.shape[2]*interp_len, x.shape[3])) np.shape(Jdiagram) #populate i = 0 for ci, C in enumerate(np.linspace(0, 1, interp_len)): for bi, B in enumerate(np.linspace(0, 1, interp_len)): Jdiagram[bi*x.shape[1]:(bi+1)*x.shape[1], ci*x.shape[2]:(ci+1)*x.shape[2], :] = x_list[i] i+=1 ensure_dir(FIGURE_DIR / 'networks' / 'starling128') fig, ax = plt.subplots(figsize=(10,10)) ax.matshow(np.squeeze(Jdiagram), vmin = 0, cmap=plt.cm.afmhot, origin = 'lower') ax.axis('off') save_fig(FIGURE_DIR / 'networks' / 'starling128'/ ('GAN_JDiagram_128'), dpi=300, save_jpg=True) ### reconstruct back into x ###Output _____no_output_____ ###Markdown Plot samples from latent ###Code z = model.encode(example_data).numpy() xmax, ymax = [3,3] xmin, ymin = [-3,-3] print(xmax, ymax, xmin, ymin) # sample from grid nx = ny= 10 meshgrid = np.meshgrid(np.linspace(xmin, xmax, nx), np.linspace(ymin, ymax, ny)) meshgrid = np.array(meshgrid).reshape(2, nx*ny).T x_grid = model.generate(meshgrid) x_grid = x_grid.numpy().reshape(nx, ny, DIMS[0], DIMS[1], DIMS[2]) # fill canvas canvas = np.zeros((nx*DIMS[0], ny*DIMS[1])) for xi in range(nx): for yi in range(ny): canvas[xi*DIMS[0]:xi*DIMS[0]+DIMS[0], yi*DIMS[1]:yi*DIMS[1]+DIMS[1]] = x_grid[xi, yi,:,:,:].squeeze() fig, ax = plt.subplots(figsize=(15,10)) ax.matshow(canvas, vmin = 0, cmap=plt.cm.Greys, origin = 'lower') ax.axis('off') ###Output _____no_output_____ ###Markdown plot dataset ###Code all_x = [] all_z = [] all_indv = [] all_labels = [] for batch, train_x in tqdm( zip(range(N_TRAIN_BATCHES), train_dataset), total=N_TRAIN_BATCHES ): x = train_x[0] all_x.append(x) x = tf.cast(tf.reshape(x, [BATCH_SIZE] + list(DIMS)), tf.float32) / 255 #model.train_net(x) all_z.append(model.encode(x).numpy()) all_indv.append(train_x[3].numpy()) all_labels.append(train_x[2].numpy()) all_z = np.vstack(all_z) all_indv = np.concatenate(all_indv) all_labels = np.concatenate(all_labels) all_x = np.concatenate(all_x) all_x = np.reshape(all_x, [len(all_x)] + list(DIMS[:2])) from avgn.visualization.projections import scatter_spec np.shape(all_z) plt.scatter(all_z[:,0], all_z[:,1], s=1, c='k', alpha=0.1) scatter_spec( np.vstack(all_z), all_x, column_size=15, x_range = [-4,4], y_range = [-4,4], pal_color="hls", color_points=False, enlarge_points=20, figsize=(10, 10), scatter_kwargs = { 'labels': list(all_labels), 'alpha':0.25, 's': 1, 'show_legend': False }, matshow_kwargs = { 'cmap': plt.cm.Greys }, line_kwargs = { 'lw':1, 'ls':"solid", 'alpha':0.25, }, draw_lines=True ); ###Output _____no_output_____
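###Markdown A note on the three-way interpolation used above: the weights are C, B and A = 1 - C - B, so once B + C > 1 the weight A becomes negative and those grid cells extrapolate outside the triangle spanned by the three latent points rather than interpolating between them. The compact NumPy sketch below reproduces just the weight grid, independently of the GAN; the latent size and the three points are placeholders.
###Code
import numpy as np

n_z, interp_len = 128, 5                    # placeholder latent size / grid resolution
pt1, pt2, pt3 = np.random.normal(size=(3, n_z))

z_list = []
for C in np.linspace(0, 1, interp_len):
    for B in np.linspace(0, 1, interp_len):
        A = 1 - C - B                       # negative once B + C > 1 (extrapolation)
        z_list.append(C * pt1 + B * pt2 + A * pt3)

# Shape (interp_len, interp_len, n_z), indexed as [ci, bi, :];
# flatten to (interp_len**2, n_z) before passing it to the generator.
z_grid = np.stack(z_list).reshape(interp_len, interp_len, n_z)
###Output _____no_output_____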
docs/beta/notebooks/Carver.ipynb
###Markdown Carving Unit TestsSo far, we have always generated _system input_, i.e. data that the program as a whole obtains via its input channels. If we are interested in testing only a small set of functions, having to go through the system can be very inefficient. This chapter introduces a technique known as _carving_, which, given a system test, automatically extracts a set of _unit tests_ that replicate the calls seen during the unit test. The key idea is to _record_ such calls such that we can _replay_ them later – as a whole or selectively. On top, we also explore how to synthesize API grammars from carved unit tests; this means that we can _synthesize API tests without having to write a grammar at all._ **Prerequisites*** Carving makes use of dynamic traces of function calls and variables, as introduced in the [chapter on configuration fuzzing](ConfigurationFuzzer.ipynb).* Using grammars to test units was introduced in the [chapter on API fuzzing](APIFuzzer.ipynb). System Tests vs Unit TestsRemember the URL grammar introduced for [grammar fuzzing](Grammars.ipynb)? With such a grammar, we can happily test a Web browser again and again, checking how it reacts to arbitrary page requests.Let us define a very simple "web browser" that goes and downloads the content given by the URL. ###Code import urllib.parse import urllib.request import fuzzingbook_utils def webbrowser(url): """Download the http/https resource given by the URL""" response = urllib.request.urlopen(url) if response.getcode() == 200: contents = response.read() return contents.decode("utf8") ###Output _____no_output_____ ###Markdown Let us apply this on [fuzzingboook.org](https://www.fuzzingbook.org/) and measure the time, using the [Timer class](Timer.ipynb): ###Code from Timer import Timer with Timer() as webbrowser_timer: fuzzingbook_contents = webbrowser( "http://www.fuzzingbook.org/html/Fuzzer.html") print("Downloaded %d bytes in %.2f seconds" % (len(fuzzingbook_contents), webbrowser_timer.elapsed_time())) fuzzingbook_contents[:100] ###Output _____no_output_____ ###Markdown A full webbrowser, of course, would also render the HTML content. We can achieve this using these commands (but we don't, as we do not want to replicate the entire Web page here):```pythonfrom IPython.core.display import HTML, displayHTML(fuzzingbook_contents)``` Having to start a whole browser (or having it render a Web page) again and again means lots of overhead, though – in particular if we want to test only a subset of its functionality. In particular, after a change in the code, we would prefer to test only the subset of functions that is affected by the change, rather than running the well-tested functions again and again. Let us assume we change the function that takes care of parsing the given URL and decomposing it into the individual elements – the scheme ("http"), the network location (`"www.fuzzingbook.com"`), or the path (`"/html/Fuzzer.html"`). This function is named `urlparse()`: ###Code from urllib.parse import urlparse urlparse('https://www.fuzzingbook.com/html/Carver.html') ###Output _____no_output_____ ###Markdown You see how the individual elements of the URL – the _scheme_ (`"http"`), the _network location_ (`"www.fuzzingbook.com"`), or the path (`"//html/Carver.html"`) are all properly identified. Other elements (like `params`, `query`, or `fragment`) are empty, because they were not part of our input. The interesting thing is that executing only `urlparse()` is orders of magnitude faster than running all of `webbrowser()`. 
Let us measure the factor: ###Code runs = 1000 with Timer() as urlparse_timer: for i in range(runs): urlparse('https://www.fuzzingbook.com/html/Carver.html') avg_urlparse_time = urlparse_timer.elapsed_time() / 1000 avg_urlparse_time ###Output _____no_output_____ ###Markdown Compare this to the time required by the webbrowser ###Code webbrowser_timer.elapsed_time() ###Output _____no_output_____ ###Markdown The difference in time is huge: ###Code webbrowser_timer.elapsed_time() / avg_urlparse_time ###Output _____no_output_____ ###Markdown Hence, in the time it takes to run `webbrowser()` once, we can have _tens of thousands_ of executions of `urlparse()` – and this does not even take into account the time it takes the browser to render the downloaded HTML, to run the included scripts, and whatever else happens when a Web page is loaded. Hence, strategies that allow us to test at the _unit_ level are very promising as they can save lots of overhead. Carving Unit TestsTesting methods and functions at the unit level requires a very good understanding of the individual units to be tested as well as their interplay with other units. Setting up an appropriate infrastructure and writing unit tests by hand thus is demanding, yet rewarding. There is, however, an interesting alternative to writing unit tests by hand. The technique of _carving_ automatically _converts system tests into unit tests_ by means of recording and replaying function calls:1. During a system test (given or generated), we _record_ all calls into a function, including all arguments and other variables the function reads.2. From these, we synthesize a self-contained _unit test_ that reconstructs the function call with all arguments.3. This unit test can be executed (replayed) at any time with high efficiency.In the remainder of this chapter, let us explore these steps. Recording CallsOur first challenge is to record function calls together with their arguments. (In the interest of simplicity, we restrict ourself to arguments, ignoring any global variables or other non-arguments that are read by the function.) To record calls and arguments, we use the mechanism [we introduced for coverage](Coverage.ipynb): By setting up a tracer function, we track all calls into individual functions, also saving their arguments. Just like `Coverage` objects, we want to use `Carver` objects to be able to be used in conjunction with the `with` statement, such that we can trace a particular code block:```pythonwith Carver() as carver: function_to_be_traced()c = carver.calls()```The initial definition supports this construct: ###Code import sys class Carver(object): def __init__(self, log=False): self._log = log self.reset() def reset(self): self._calls = {} # Start of `with` block def __enter__(self): self.original_trace_function = sys.gettrace() sys.settrace(self.traceit) return self # End of `with` block def __exit__(self, exc_type, exc_value, tb): sys.settrace(self.original_trace_function) ###Output _____no_output_____ ###Markdown The actual work takes place in the `traceit()` method, which records all calls in the `_calls` attribute. First, we define two helper functions: ###Code import inspect def get_qualified_name(code): """Return the fully qualified name of the current function""" name = code.co_name module = inspect.getmodule(code) if module is not None: name = module.__name__ + "." 
+ name return name def get_arguments(frame): """Return call arguments in the given frame""" # When called, all arguments are local variables arguments = [(var, frame.f_locals[var]) for var in frame.f_locals] arguments.reverse() # Want same order as call return arguments class CallCarver(Carver): def add_call(self, function_name, arguments): """Add given call to list of calls""" if function_name not in self._calls: self._calls[function_name] = [] self._calls[function_name].append(arguments) # Tracking function: Record all calls and all args def traceit(self, frame, event, arg): if event != "call": return None code = frame.f_code function_name = code.co_name qualified_name = get_qualified_name(code) arguments = get_arguments(frame) self.add_call(function_name, arguments) if qualified_name != function_name: self.add_call(qualified_name, arguments) if self._log: print(simple_call_string(function_name, arguments)) return None ###Output _____no_output_____ ###Markdown Finally, we need some convenience functions to access the calls: ###Code class CallCarver(CallCarver): def calls(self): """Return a dictionary of all calls traced.""" return self._calls def arguments(self, function_name): """Return a list of all arguments of the given function as (VAR, VALUE) pairs. Raises an exception if the function was not traced.""" return self._calls[function_name] def called_functions(self, qualified=False): """Return all functions called.""" if qualified: return [function_name for function_name in self._calls.keys() if function_name.find('.') >= 0] else: return [function_name for function_name in self._calls.keys() if function_name.find('.') < 0] ###Output _____no_output_____ ###Markdown Recording my_sqrt() Let's try out our new `Carver` class – first on a very simple function: ###Code from Intro_Testing import my_sqrt with CallCarver() as sqrt_carver: my_sqrt(2) my_sqrt(4) ###Output _____no_output_____ ###Markdown We can retrieve all calls seen... ###Code sqrt_carver.calls() sqrt_carver.called_functions() ###Output _____no_output_____ ###Markdown ... as well as the arguments of a particular function: ###Code sqrt_carver.arguments("my_sqrt") ###Output _____no_output_____ ###Markdown We define a convenience function for nicer printing of these lists: ###Code def simple_call_string(function_name, argument_list): """Return function_name(arg[0], arg[1], ...) as a string""" return function_name + "(" + \ ", ".join([var + "=" + repr(value) for (var, value) in argument_list]) + ")" for function_name in sqrt_carver.called_functions(): for argument_list in sqrt_carver.arguments(function_name): print(simple_call_string(function_name, argument_list)) ###Output my_sqrt(x=2) my_sqrt(x=4) __exit__(self=<__main__.CallCarver object at 0x10df8a6d8>, exc_type=None, exc_value=None, tb=None) ###Markdown This is a syntax we can directly use to invoke `my_sqrt()` again: ###Code eval("my_sqrt(x=2)") ###Output _____no_output_____ ###Markdown Carving urlparse() What happens if we apply this to `webbrowser()`? 
###Code with CallCarver() as webbrowser_carver: webbrowser("http://www.example.com") ###Output _____no_output_____ ###Markdown We see that retrieving a URL from the Web requires quite some functionality: ###Code print(webbrowser_carver.called_functions(qualified=True)) ###Output ['urllib.request.urlopen', 'urllib.request.open', 'urllib.request.__init__', 'urllib.request.full_url', 'urllib.parse.unwrap', 'urllib.parse.splittag', 'urllib.request._parse', 'urllib.parse.splittype', 'urllib.parse.splithost', 'urllib.parse.unquote', 'urllib.request.data', 'urllib.request.request_host', 'urllib.parse.urlparse', 'urllib.parse._coerce_args', 'urllib.parse.urlsplit', 'urllib.parse.clear_cache', 'urllib.parse._splitnetloc', 'urllib.parse._noop', 'urllib.request.do_request_', 'urllib.request.has_proxy', 'urllib.request.has_header', 'urllib.request.add_unredirected_header', 'urllib.request._open', 'urllib.request._call_chain', 'urllib.request.http_open', 'urllib.request.do_open', 'http.client.__init__', 'http.client._get_hostport', 'http.client.set_debuglevel', 'urllib.request.<genexpr>', 'urllib.request.get_method', 'http.client.request', 'http.client._send_request', 'http.client.<genexpr>', 'http.client.putrequest', 'http.client._output', 'http.client.putheader', 'http.client._get_content_length', 'http.client.endheaders', 'http.client._send_output', 'http.client.send', 'http.client.connect', 'socket.create_connection', 'socket.getaddrinfo', 'encodings.idna.encode', 'socket._intenum_converter', 'enum.__call__', 'enum.__new__', 'socket.__init__', 'http.client.getresponse', 'socket.makefile', 'socket.readable', 'http.client.begin', 'http.client._read_status', 'socket.readinto', 'http.client.parse_headers', 'email.parser.__init__', 'email.parser.parsestr', 'email.parser.parse', 'email.feedparser.__init__', 'email.message.__init__', 'email.feedparser.feed', 'email.feedparser.push', 'email.feedparser.pushlines', 'email.feedparser._call_parse', 'email.feedparser._parsegen', 'email.feedparser._new_message', 'email.feedparser.__iter__', 'email.feedparser.__next__', 'email.feedparser.readline', 'email.feedparser._parse_headers', 'email._policybase.header_source_parse', 'email.message.set_raw', 'email.message.get_content_type', 'email.message.get', 'email._policybase.header_fetch_parse', 'email._policybase._sanitize_header', 'email.utils._has_surrogates', 'email.message._splitparam', 'email.message.get_content_maintype', 'email.feedparser.close', 'email.message.set_payload', 'email.feedparser._pop_message', 'http.client._check_close', 'http.client.close', 'socket.close', 'urllib.request.get_full_url', 'urllib.request.http_response', 'http.client.info', 'http.client.getcode', 'http.client.read', 'http.client._safe_read', 'http.client._close_conn', 'socket._decref_socketios', 'socket._real_close'] ###Markdown Among several other functions, we also have a call to `urlparse()`: ###Code urlparse_argument_list = webbrowser_carver.arguments("urllib.parse.urlparse") urlparse_argument_list ###Output _____no_output_____ ###Markdown Again, we can convert this into a well-formatted call: ###Code urlparse_call = simple_call_string("urlparse", urlparse_argument_list[0]) urlparse_call ###Output _____no_output_____ ###Markdown Again, we can re-execute this call: ###Code eval(urlparse_call) ###Output _____no_output_____ ###Markdown We now have successfully carved the call to `urlparse()` out of the `webbrowser()` execution. 
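Before turning to the general replay problem below, note that for calls whose arguments are all primitive values the carved strings can already serve as a small regression check. The following sketch uses only the names defined above; the assertion on the scheme is just an illustrative check, not part of the original text.
###Code
from urllib.parse import urlparse

for argument_list in webbrowser_carver.arguments("urllib.parse.urlparse"):
    call = simple_call_string("urlparse", argument_list)
    result = eval(call)                 # e.g. ParseResult(scheme='http', ...)
    assert result.scheme in ("http", "https")
###Output _____no_output_____ ###Markdown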
Replaying Calls Replaying calls in their entirety and in all generality is tricky, as there are several challenges to be addressed. These include:1. We need to be able to _access_ individual functions. If we access a function by name, the name must be in scope. If the name is not visible (for instance, because it is a name internal to the module), we must make it visible.2. Any _resources_ accessed outside of arguments must be recorded and reconstructed for replay as well. This can be difficult if variables refer to external resources such as files or network resources.3. _Complex objects_ must be reconstructed as well.These constraints make carving hard or even impossible if the function to be tested interacts heavily with its environment. To illustrate these issues, consider the `email.parser.parse()` method that is invoked in `webbrowser()`: ###Code email_parse_argument_list = webbrowser_carver.arguments("email.parser.parse") ###Output _____no_output_____ ###Markdown Calls to this method look like this: ###Code email_parse_call = simple_call_string( "email.parser.parse", email_parse_argument_list[0]) email_parse_call ###Output _____no_output_____ ###Markdown We see that `email.parser.parse()` is part of a `email.parser.Parser` object and it gets a `StringIO` object. Both are non-primitive values. How could we possibly reconstruct them? Serializing ObjectsThe answer to the problem of complex objects lies in creating a _persistent_ representation that can be _reconstructed_ at later points in time. This process is known as _serialization_; in Python, it is also known as _pickling_. The `pickle` module provides means to create a serialized representation of an object. Let us apply this on the `email.parser.Parser` object we just found: ###Code import pickle parser_object = email_parse_argument_list[0][0][1] parser_object pickled = pickle.dumps(parser_object) pickled ###Output _____no_output_____ ###Markdown From this string representing the serialized `email.parser.Parser` object, we can recreate the Parser object at any time: ###Code unpickled_parser_object = pickle.loads(pickled) unpickled_parser_object ###Output _____no_output_____ ###Markdown The serialization mechanism allows us to produce a representation for all objects passed as parameters (assuming they can be pickled, that is). We can now extend the `simple_call_string()` function such that it automatically pickles objects. Additionally, we set it up such that if the first parameter is named `self` (i.e., it is a class method), we make it a method of the `self` object. ###Code def call_value(value): value_as_string = repr(value) if value_as_string.find('<') >= 0: # Complex object value_as_string = "pickle.loads(" + repr(pickle.dumps(value)) + ")" return value_as_string def call_string(function_name, argument_list): """Return function_name(arg[0], arg[1], ...) as a string, pickling complex objects""" if len(argument_list) > 0: (first_var, first_value) = argument_list[0] if first_var == "self": # Make this a method call method_name = function_name.split(".")[-1] function_name = call_value(first_value) + "." 
+ method_name argument_list = argument_list[1:] return function_name + "(" + \ ", ".join([var + "=" + call_value(value) for (var, value) in argument_list]) + ")" ###Output _____no_output_____ ###Markdown Let us apply the extended `call_string()` method to create a call for `email.parser.parse()`, including pickled objects: ###Code call = call_string("email.parser.parse", email_parse_argument_list[0]) print(call) ###Output pickle.loads(b'\x80\x03cemail.parser\nParser\nq\x00)\x81q\x01}q\x02(X\x06\x00\x00\x00_classq\x03chttp.client\nHTTPMessage\nq\x04X\x06\x00\x00\x00policyq\x05cemail._policybase\nCompat32\nq\x06)\x81q\x07ub.').parse(fp=pickle.loads(b'\x80\x03c_io\nStringIO\nq\x00)\x81q\x01(XY\x01\x00\x00Accept-Ranges: bytes\r\nCache-Control: max-age=604800\r\nContent-Type: text/html; charset=UTF-8\r\nDate: Tue, 11 Dec 2018 13:16:18 GMT\r\nEtag: "1541025663+gzip"\r\nExpires: Tue, 18 Dec 2018 13:16:18 GMT\r\nLast-Modified: Fri, 09 Aug 2013 23:54:35 GMT\r\nServer: ECS (dca/24D5)\r\nVary: Accept-Encoding\r\nX-Cache: HIT\r\nContent-Length: 1270\r\nConnection: close\r\n\r\nq\x02X\x01\x00\x00\x00\nq\x03MY\x01Ntq\x04b.'), headersonly=False) ###Markdown With this call involvimng the pickled object, we can now re-run the original call and obtain a valid result: ###Code eval(call) ###Output _____no_output_____ ###Markdown All CallsSo far, we have seen only one call of `webbrowser()`. How many of the calls within `webbrowser()` can we actually carve and replay? Let us try this out and compute the numbers. ###Code import traceback import enum import socket all_functions = set(webbrowser_carver.called_functions(qualified=True)) call_success = set() run_success = set() exceptions_seen = set() for function_name in webbrowser_carver.called_functions(qualified=True): for argument_list in webbrowser_carver.arguments(function_name): try: call = call_string(function_name, argument_list) call_success.add(function_name) result = eval(call) run_success.add(function_name) except Exception as exc: exceptions_seen.add(repr(exc)) # print("->", call, file=sys.stderr) # traceback.print_exc() # print("", file=sys.stderr) continue print("%d/%d calls (%.2f%%) successfully created and %d/%d calls (%.2f%%) successfully ran" % ( len(call_success), len(all_functions), len( call_success) * 100 / len(all_functions), len(run_success), len(all_functions), len(run_success) * 100 / len(all_functions))) ###Output 83/95 calls (87.37%) successfully created and 47/95 calls (49.47%) successfully ran ###Markdown About half of the calls succeed. 
Let us take a look into some of the error messages we get: ###Code for i in range(10): print(list(exceptions_seen)[i]) ###Output SyntaxError('invalid syntax', ('<string>', 1, 13, "http.client.<genexpr>(k='Connection', .0=pickle.loads(b'\\x80\\x03cbuiltins\\niter\\nq\\x00]q\\x01\\x85q\\x02Rq\\x03.'))")) SyntaxError('invalid syntax', ('<string>', 1, 13, "http.client.<genexpr>(k='User-Agent', .0=pickle.loads(b'\\x80\\x03cbuiltins\\niter\\nq\\x00]q\\x01\\x85q\\x02Rq\\x03.'))")) NameError("name 'ParseResult' is not defined",) TypeError('an integer is required (got type object)',) TypeError("'NoneType' object is not callable",) NameError("name 'Compat32' is not defined",) TypeError("cannot serialize '_io.BufferedReader' object",) NameError("name 'email' is not defined",) NameError("name 'SplitResult' is not defined",) SyntaxError('invalid syntax', ('<string>', 1, 13, "http.client.<genexpr>(.0=pickle.loads(b'\\x80\\x03cbuiltins\\niter\\nq\\x00]q\\x01\\x85q\\x02Rq\\x03.'))")) ###Markdown We see that:* **A large majority of calls could be converted into call strings.** If this is not the case, this is mostly due to having unserialized objects being passed.* **About half of the calls could be executed.** The error messages for the failing runs are varied; the most frequent being that some internal name is invoked that is not in scope. Our carving mechanism should be taken with a grain of salt: We still do not cover the situation where external variables and values (such as global variables) are being accessed, and the serialization mechanism cannot recreate external resources. Still, if the function of interest falls among those that _can_ be carved and replayed, we can very effectively re-run its calls with their original arguments. Mining API Grammars from Carved CallsSo far, we have used carved calls to replay exactly the same invocations as originally encountered. However, we can also _mutate_ carved calls to effectively fuzz APIs with previously recorded arguments.The general idea is as follows:1. First, we record all calls of a specific function from a given execution of the program.2. Second, we create a grammar that incorporates all these calls, with separate rules for each argument and alternatives for each value found; this allows us to produce calls that arbitrarily _recombine_ these arguments.Let us explore these steps in the following sections. From Calls to GrammarsLet us start with an example. The `power(x, y)` function returns $x^y$; it is but a wrapper around the equivalent `math.pow()` function. (Since `power()` is defined in Python, we can trace it – in contrast to `math.pow()`, which is implemented in C.) 
###Code import math def power(x, y): return math.pow(x, y) ###Output _____no_output_____ ###Markdown Let us invoke `power()` while recording its arguments: ###Code with CallCarver() as power_carver: z = power(1, 2) z = power(3, 4) power_carver.arguments("power") ###Output _____no_output_____ ###Markdown From this list of recorded arguments, we could now create a grammar for the `power()` call, with `x` and `y` expanding into the values seen: ###Code from Grammars import START_SYMBOL, is_valid_grammar, new_symbol POWER_GRAMMAR = { "<start>": ["power(<x>, <y>)"], "<x>": ["1", "3"], "<y>": ["2", "4"] } assert is_valid_grammar(POWER_GRAMMAR) ###Output _____no_output_____ ###Markdown When fuzzing with this grammar, we then get arbitrary combinations of `x` and `y`; aiming for coverage will ensure that all values are actually tested at least once: ###Code from GrammarCoverageFuzzer import GrammarCoverageFuzzer power_fuzzer = GrammarCoverageFuzzer(POWER_GRAMMAR) [power_fuzzer.fuzz() for i in range(5)] ###Output _____no_output_____ ###Markdown What we need is a method to automatically convert the arguments as seen in `power_carver` to the grammar as seen in `POWER_GRAMMAR`. This is what we define in the next section. A Grammar Miner for CallsWe introduce a class `CallGrammarMiner`, which, given a `Carver`, automatically produces a grammar from the calls seen. To initialize, we pass the carver object: ###Code class CallGrammarMiner(object): def __init__(self, carver, log=False): self.carver = carver self.log = log ###Output _____no_output_____ ###Markdown Initial GrammarThe initial grammar produces a single call. The possible `` expansions are to be constructed later: ###Code import copy class CallGrammarMiner(CallGrammarMiner): CALL_SYMBOL = "<call>" def initial_grammar(self): return copy.deepcopy( {START_SYMBOL: [self.CALL_SYMBOL], self.CALL_SYMBOL: [] }) m = CallGrammarMiner(power_carver) initial_grammar = m.initial_grammar() initial_grammar ###Output _____no_output_____ ###Markdown A Grammar from ArgumentsLet us start by creating a grammar from a list of arguments. The method `mine_arguments_grammar()` creates a grammar for the arguments seen during carving, such as these: ###Code arguments = power_carver.arguments("power") arguments ###Output _____no_output_____ ###Markdown The `mine_arguments_grammar()` method iterates through the variables seen and creates a mapping `variables` of variable names to a set of values seen (as strings, going through `call_value()`). In a second step, it then creates a grammar with a rule for each variable name, expanding into the values seen. 
###Code class CallGrammarMiner(CallGrammarMiner): def var_symbol(self, function_name, var, grammar): return new_symbol(grammar, "<" + function_name + "-" + var + ">") def mine_arguments_grammar(self, function_name, arguments, grammar): var_grammar = {} variables = {} for argument_list in arguments: for (var, value) in argument_list: value_string = call_value(value) if self.log: print(var, "=", value_string) if value_string.find("<") >= 0: var_grammar["<langle>"] = ["<"] value_string = value_string.replace("<", "<langle>") if var not in variables: variables[var] = set() variables[var].add(value_string) var_symbols = [] for var in variables: var_symbol = self.var_symbol(function_name, var, grammar) var_symbols.append(var_symbol) var_grammar[var_symbol] = list(variables[var]) return var_grammar, var_symbols m = CallGrammarMiner(power_carver) var_grammar, var_symbols = m.mine_arguments_grammar( "power", arguments, initial_grammar) var_grammar ###Output _____no_output_____ ###Markdown The additional return value `var_symbols` is a list of argument symbols in the call: ###Code var_symbols ###Output _____no_output_____ ###Markdown A Grammar from CallsTo get the grammar for a single function (`mine_function_grammar()`), we add a call to the function: ###Code class CallGrammarMiner(CallGrammarMiner): def function_symbol(self, function_name, grammar): return new_symbol(grammar, "<" + function_name + ">") def mine_function_grammar(self, function_name, grammar): arguments = self.carver.arguments(function_name) if self.log: print(function_name, arguments) var_grammar, var_symbols = self.mine_arguments_grammar( function_name, arguments, grammar) function_grammar = var_grammar function_symbol = self.function_symbol(function_name, grammar) if len(var_symbols) > 0 and var_symbols[0].find("-self") >= 0: # Method call function_grammar[function_symbol] = [ var_symbols[0] + "." + function_name + "(" + ", ".join(var_symbols[1:]) + ")"] else: function_grammar[function_symbol] = [ function_name + "(" + ", ".join(var_symbols) + ")"] if self.log: print(function_symbol, "::=", function_grammar[function_symbol]) return function_grammar, function_symbol m = CallGrammarMiner(power_carver) function_grammar, function_symbol = m.mine_function_grammar( "power", initial_grammar) function_grammar ###Output _____no_output_____ ###Markdown The additionally returned `function_symbol` holds the name of the function call just added: ###Code function_symbol ###Output _____no_output_____ ###Markdown A Grammar from all CallsLet us now repeat the above for all function calls seen during carving. 
To this end, we simply iterate over all function calls seen: ###Code power_carver.called_functions() class CallGrammarMiner(CallGrammarMiner): def mine_call_grammar(self, function_list=None, qualified=False): grammar = self.initial_grammar() fn_list = function_list if function_list is None: fn_list = self.carver.called_functions(qualified=qualified) for function_name in fn_list: if function_list is None and (function_name.startswith("_") or function_name.startswith("<")): continue # Internal function # Ignore errors with mined functions try: function_grammar, function_symbol = self.mine_function_grammar( function_name, grammar) except: if function_list is not None: raise if function_symbol not in grammar[self.CALL_SYMBOL]: grammar[self.CALL_SYMBOL].append(function_symbol) grammar.update(function_grammar) assert is_valid_grammar(grammar) return grammar ###Output _____no_output_____ ###Markdown The method `mine_call_grammar()` is the one that clients can and should use – first for mining... ###Code m = CallGrammarMiner(power_carver) power_grammar = m.mine_call_grammar() power_grammar ###Output _____no_output_____ ###Markdown ...and then for fuzzing: ###Code power_fuzzer = GrammarCoverageFuzzer(power_grammar) [power_fuzzer.fuzz() for i in range(5)] ###Output _____no_output_____ ###Markdown With this, we have successfully extracted a grammar from a recorded execution; in contrast to "simple" carving, our grammar allows us to _recombine_ arguments and thus to fuzz at the API level. Fuzzing Web FunctionsLet us now apply our grammar miner on a larger API – the `urlparse()` function we already encountered during carving. ###Code from Carver import webbrowser with CallCarver() as webbrowser_carver: webbrowser("https://www.fuzzingbook.org") webbrowser("http://www.example.com") ###Output _____no_output_____ ###Markdown We can mine a grammar from the calls encountered: ###Code m = CallGrammarMiner(webbrowser_carver) webbrowser_grammar = m.mine_call_grammar() ###Output _____no_output_____ ###Markdown This is a rather large grammar: ###Code print(webbrowser_grammar['<call>']) ###Output ['<webbrowser>', '<urlopen>', '<open>', '<full_url>', '<unwrap>', '<splittag>', '<splittype>', '<splithost>', '<unquote>', '<data>', '<request_host>', '<urlparse>', '<urlsplit>', '<do_request_>', '<has_proxy>', '<has_header>', '<add_unredirected_header>', '<https_open>', '<do_open>', '<get_method>', '<create_connection>', '<getaddrinfo>', '<encode>', '<match_hostname>', '<ip_address>', '<readable>', '<begin>', '<parsestr>', '<parse>', '<push>', '<pushlines>', '<readline>', '<header_source_parse>', '<set_raw>', '<get_content_type>', '<get>', '<header_fetch_parse>', '<get_content_maintype>', '<set_payload>', '<get_full_url>', '<http_response>', '<info>', '<getcode>', '<http_open>'] ###Markdown Here's the rule for the `urlsplit()` function: ###Code webbrowser_grammar["<urlsplit>"] ###Output _____no_output_____ ###Markdown Here are the arguments. Note that although we only passed `http://www.fuzzingbook.org` as a parameter, we also see the `https:` variant. That is because opening the `http:` URL automatically redirects to the `https:` URL, which is then also processed by `urlsplit()`. 
###Code webbrowser_grammar["<urlsplit-url>"] ###Output _____no_output_____ ###Markdown There also is some variation in the `scheme` argument: ###Code webbrowser_grammar["<urlsplit-scheme>"] ###Output _____no_output_____ ###Markdown If we now apply a fuzzer on these rules, we systematically cover all variations of arguments seen, including, of course, combinations not seen during carving. Again, we are fuzzing at the API level here. ###Code urlsplit_fuzzer = GrammarCoverageFuzzer( webbrowser_grammar, start_symbol="<urlsplit>") for i in range(5): print(urlsplit_fuzzer.fuzz()) ###Output urlsplit('http://www.example.com', '', True) urlsplit('https://www.fuzzingbook.org', '', True) urlsplit('https://www.fuzzingbook.org', '', True) urlsplit('https://www.fuzzingbook.org', '', True) urlsplit('https://www.fuzzingbook.org', '', True) ###Markdown Just as seen with carving, running tests at the API level is orders of magnitude faster than executing system tests. Hence, this calls for means to fuzz at the method level: ###Code from urllib.parse import urlsplit from Timer import Timer with Timer() as urlsplit_timer: urlsplit('http://www.fuzzingbook.org/', 'http', True) urlsplit_timer.elapsed_time() with Timer() as webbrowser_timer: webbrowser("http://www.fuzzingbook.org") webbrowser_timer.elapsed_time() webbrowser_timer.elapsed_time() / urlsplit_timer.elapsed_time() ###Output _____no_output_____ ###Markdown But then again, the caveats encountered during carving apply, notably the requirement to recreate the original function environment. If we also alter or recombine arguments, we get the additional risk of _violating an implicit precondition_ – that is, invoking a function with arguments the function was never designed for. Such _false alarms_, resulting from incorrect invocations rather than incorrect implementations, must then be identified (typically manually) and wed out (for instance, by altering or constraining the grammar). The huge speed gains at the API level, however, may well justify this additional investment. Lessons Learned* _Carving_ allows for effective replay of function calls recorded during a system test* A function call can be _orders of magnitude faster_ than a system invocation.* _Serialization_ allows to create persistent representations of complex objects.* Functions that heavily interact with their environment and/or access external resources are difficult to carve.* From carved calls, one can produce API grammars that arbitrarily combine carved arguments. Next StepsIn the next chapter, we will discuss [how to reduce failure-inducing inputs](Reducer.ipynb). BackgroundCarving was invented by Elbaum et al. \cite{Elbaum2006} and originally implemented for Java. In this chapter, we follow several of their design choices (including recording and serializing method arguments only).The combination of carving and fuzzing at the API level was first conducted by Alexander Kampmann in his PhD work. Exercises Exercise 1: Carving for Regression TestingSo far, during carving, we only have looked into reproducing _calls_, but not into actually checking the _results_ of these calls. This is important for _regression testing_ – i.e. checking whether a change to code does not impede existing functionality. We can build this by recording not only _calls_, but also _return values_ – and then later compare whether the same calls result in the same values. This may not work on all occasions; values that depend on time, randomness, or other external factors may be different. 
Still, for functionality that abstracts from these details, checking that nothing has changed is an important part of testing. Our aim is to design a class `ResultCarver` that extends `CallCarver` by recording both calls and return values.In a first step, create a `traceit()` method that also tracks return values by extending the `traceit()` method. The `traceit()` event type is `"return"` and the `arg` parameter is the returned value. Here is a prototype that only prints out the returned values: ###Code class ResultCarver(CallCarver): def traceit(self, frame, event, arg): if event == "return": if self._log: print("Result:", arg) super().traceit(frame, event, arg) # Need to return traceit function such that it is invoked for return # events return self.traceit with ResultCarver(log=True) as result_carver: my_sqrt(2) ###Output my_sqrt(x=2) Result: 1.414213562373095 __exit__(self=<__main__.ResultCarver object at 0x10f2243c8>, exc_type=None, exc_value=None, tb=None) ###Markdown Part 1: Store function resultsExtend the above code such that results are _stored_ in a way that associates them with the currently returning function (or method). To this end, you need to keep track of the _current stack of called functions_. **Solution.** Here's a solution, building on the above: ###Code class ResultCarver(CallCarver): def reset(self): super().reset() self._call_stack = [] self._results = {} def add_result(self, function_name, arguments, result): key = simple_call_string(function_name, arguments) self._results[key] = result def traceit(self, frame, event, arg): if event == "call": code = frame.f_code function_name = code.co_name qualified_name = get_qualified_name(code) self._call_stack.append( (function_name, qualified_name, get_arguments(frame))) if event == "return": result = arg (function_name, qualified_name, arguments) = self._call_stack.pop() self.add_result(function_name, arguments, result) if function_name != qualified_name: self.add_result(qualified_name, arguments, result) if self._log: print( simple_call_string( function_name, arguments), "=", result) # Keep on processing current calls super().traceit(frame, event, arg) # Need to return traceit function such that it is invoked for return # events return self.traceit with ResultCarver(log=True) as result_carver: my_sqrt(2) result_carver._results ###Output my_sqrt(x=2) my_sqrt(x=2) = 1.414213562373095 __exit__(self=<__main__.ResultCarver object at 0x191f9a77f0>, exc_type=None, exc_value=None, tb=None) ###Markdown Part 2: Access results Give it a method `result()` that returns the value recorded for that particular function name and result:```pythonclass ResultCarver(CallCarver): def result(self, function_name, argument): """Returns the result recorded for function_name(argument"""``` **Solution.** This is mostly done in the code for part 1: ###Code class ResultCarver(ResultCarver): def result(self, function_name, argument): key = simple_call_string(function_name, arguments) return self._results[key] ###Output _____no_output_____ ###Markdown Part 3: Produce assertionsFor the functions called during `webbrowser()` execution, create a set of _assertions_ that check whether the result returned is still the same. Test this for `urllib.parse.urlparse()` and `urllib.parse.urlsplit()`. 
**Solution.** Not too hard now: ###Code with ResultCarver() as webbrowser_result_carver: webbrowser("http://www.example.com") for function_name in ["urllib.parse.urlparse", "urllib.parse.urlsplit"]: for arguments in webbrowser_result_carver.arguments(function_name): try: call = call_string(function_name, arguments) result = webbrowser_result_carver.result(function_name, arguments) print("assert", call, "==", call_value(result)) except Exception: continue ###Output assert urllib.parse.urlparse(url='http://www.example.com', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='', params='', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='', query='', fragment='') ###Markdown We can run these assertions: ###Code from urllib.parse import SplitResult, ParseResult, urlparse, urlsplit assert urlparse( url='http://www.example.com', scheme='', allow_fragments=True) == ParseResult( scheme='http', netloc='www.example.com', path='', params='', query='', fragment='') assert urlsplit( url='http://www.example.com', scheme='', allow_fragments=True) == SplitResult( scheme='http', netloc='www.example.com', path='', query='', fragment='') ###Output _____no_output_____ ###Markdown Carving Unit TestsSo far, we have always generated _system input_, i.e. data that the program as a whole obtains via its input channels. If we are interested in testing only a small set of functions, having to go through the system can be very inefficient. This chapter introduces a technique known as _carving_, which, given a system test, automatically extracts a set of _unit tests_ that replicate the calls seen during the unit test. The key idea is to _record_ such calls such that we can _replay_ them later – as a whole or selectively. **Prerequisites*** Carving makes use of dynamic traces of function calls and variables, as introduced in the [chapter on configuration fuzzing](ConfigurationFuzzer.ipynb). System Tests vs Unit TestsRemember the URL grammar introduced for [grammar fuzzing](Grammars.ipynb)? With such a grammar, we can happily test a Web browser again and again, checking how it reacts to arbitrary page requests.Let us define a very simple "web browser" that goes and downloads the content given by the URL. ###Code import urllib.parse import urllib.request import fuzzingbook_utils def webbrowser(url): """Download the http/https resource given by the URL""" response = urllib.request.urlopen(url) if response.getcode() == 200: contents = response.read() return contents.decode("utf8") ###Output _____no_output_____ ###Markdown Let us apply this on [fuzzingboook.org](https://www.fuzzingbook.org/) and measure the time, using the [Timer class](Timer.ipynb): ###Code from Timer import Timer with Timer() as webbrowser_timer: fuzzingbook_contents = webbrowser( "http://www.fuzzingbook.org/html/Fuzzer.html") print("Downloaded %d bytes in %.2f seconds" % (len(fuzzingbook_contents), webbrowser_timer.elapsed_time())) fuzzingbook_contents[:100] ###Output _____no_output_____ ###Markdown A full webbrowser, of course, would also render the HTML content. 
We can achieve this using these commands (but we don't, as we do not want to replicate the entire Web page here):```pythonfrom IPython.core.display import HTML, displayHTML(fuzzingbook_contents)``` Having to start a whole browser (or having it render a Web page) again and again means lots of overhead, though – in particular if we want to test only a subset of its functionality. In particular, after a change in the code, we would prefer to test only the subset of functions that is affected by the change, rather than running the well-tested functions again and again. Let us assume we change the function that takes care of parsing the given URL and decomposing it into the individual elements – the scheme ("http"), the network location (`"www.fuzzingbook.com"`), or the path (`"/html/Fuzzer.html"`). This function is named `urlparse()`: ###Code from urllib.parse import urlparse urlparse('https://www.fuzzingbook.com/html/Carver.html') ###Output _____no_output_____ ###Markdown You see how the individual elements of the URL – the _scheme_ (`"http"`), the _network location_ (`"www.fuzzingbook.com"`), or the path (`"//html/Carver.html"`) are all properly identified. Other elements (like `params`, `query`, or `fragment`) are empty, because they were not part of our input. The interesting thing is that executing only `urlparse()` is orders of magnitude faster than running all of `webbrowser()`. Let us measure the factor: ###Code runs = 1000 with Timer() as urlparse_timer: for i in range(runs): urlparse('https://www.fuzzingbook.com/html/Carver.html') avg_urlparse_time = urlparse_timer.elapsed_time() / 1000 avg_urlparse_time ###Output _____no_output_____ ###Markdown Compare this to the time required by the webbrowser ###Code webbrowser_timer.elapsed_time() ###Output _____no_output_____ ###Markdown The difference in time is huge: ###Code webbrowser_timer.elapsed_time() / avg_urlparse_time ###Output _____no_output_____ ###Markdown Hence, in the time it takes to run `webbrowser()` once, we can have _tens of thousands_ of executions of `urlparse()` – and this does not even take into account the time it takes the browser to render the downloaded HTML, to run the included scripts, and whatever else happens when a Web page is loaded. Hence, strategies that allow us to test at the _unit_ level are very promising as they can save lots of overhead. Carving Unit TestsTesting methods and functions at the unit level requires a very good understanding of the individual units to be tested as well as their interplay with other units. Setting up an appropriate infrastructure and writing unit tests by hand thus is demanding, yet rewarding. There is, however, an interesting alternative to writing unit tests by hand. The technique of _carving_ automatically _converts system tests into unit tests_ by means of recording and replaying function calls:1. During a system test (given or generated), we _record_ all calls into a function, including all arguments and other variables the function reads.2. From these, we synthesize a self-contained _unit test_ that reconstructs the function call with all arguments.3. This unit test can be executed (replayed) at any time with high efficiency.In the remainder of this chapter, let us explore these steps. Recording CallsOur first challenge is to record function calls together with their arguments. (In the interest of simplicity, we restrict ourself to arguments, ignoring any global variables or other non-arguments that are read by the function.) 
To record calls and arguments, we use the mechanism [we introduced for coverage](Coverage.ipynb): By setting up a tracer function, we track all calls into individual functions, also saving their arguments. Just like `Coverage` objects, we want to use `Carver` objects to be able to be used in conjunction with the `with` statement, such that we can trace a particular code block:```pythonwith Carver() as carver: function_to_be_traced()c = carver.calls()```The initial definition supports this construct: ###Code import sys class Carver(object): def __init__(self, log=False): self._log = log self.reset() def reset(self): self._calls = {} # Start of `with` block def __enter__(self): self.original_trace_function = sys.gettrace() sys.settrace(self.traceit) return self # End of `with` block def __exit__(self, exc_type, exc_value, tb): sys.settrace(self.original_trace_function) ###Output _____no_output_____ ###Markdown The actual work takes place in the `traceit()` method, which records all calls in the `_calls` attribute. First, we define two helper functions: ###Code import inspect def get_qualified_name(code): """Return the fully qualified name of the current function""" name = code.co_name module = inspect.getmodule(code) if module is not None: name = module.__name__ + "." + name return name def get_arguments(frame): """Return call arguments in the given frame""" # When called, all arguments are local variables arguments = [(var, frame.f_locals[var]) for var in frame.f_locals] arguments.reverse() # Want same order as call return arguments class CallCarver(Carver): def add_call(self, function_name, arguments): """Add given call to list of calls""" if function_name not in self._calls: self._calls[function_name] = [] self._calls[function_name].append(arguments) # Tracking function: Record all calls and all args def traceit(self, frame, event, arg): if event != "call": return None code = frame.f_code function_name = code.co_name qualified_name = get_qualified_name(code) arguments = get_arguments(frame) self.add_call(function_name, arguments) if qualified_name != function_name: self.add_call(qualified_name, arguments) if self._log: print(simple_call_string(function_name, arguments)) return None ###Output _____no_output_____ ###Markdown Finally, we need some convenience functions to access the calls: ###Code class CallCarver(CallCarver): def calls(self): """Return a dictionary of all calls traced.""" return self._calls def arguments(self, function_name): """Return a list of all arguments of the given function as (VAR, VALUE) pairs. Raises an exception if the function was not traced.""" return self._calls[function_name] def called_functions(self, qualified=False): """Return all functions called.""" if qualified: return [function_name for function_name in self._calls.keys() if function_name.find('.') >= 0] else: return [function_name for function_name in self._calls.keys() if function_name.find('.') < 0] ###Output _____no_output_____ ###Markdown Recording my_sqrt() Let's try out our new `Carver` class – first on a very simple function: ###Code from Intro_Testing import my_sqrt with CallCarver() as sqrt_carver: my_sqrt(2) my_sqrt(4) ###Output _____no_output_____ ###Markdown We can retrieve all calls seen... ###Code sqrt_carver.calls() sqrt_carver.called_functions() ###Output _____no_output_____ ###Markdown ... 
as well as the arguments of a particular function: ###Code sqrt_carver.arguments("my_sqrt") ###Output _____no_output_____ ###Markdown We define a convenience function for nicer printing of these lists: ###Code def simple_call_string(function_name, argument_list): """Return function_name(arg[0], arg[1], ...) as a string""" return function_name + "(" + \ ", ".join([var + "=" + repr(value) for (var, value) in argument_list]) + ")" for function_name in sqrt_carver.called_functions(): for argument_list in sqrt_carver.arguments(function_name): print(simple_call_string(function_name, argument_list)) ###Output my_sqrt(x=2) my_sqrt(x=4) __exit__(self=<__main__.CallCarver object at 0x10feb92e8>, exc_type=None, exc_value=None, tb=None) ###Markdown This is a syntax we can directly use to invoke `my_sqrt()` again: ###Code eval("my_sqrt(x=2)") ###Output _____no_output_____ ###Markdown Carving urlparse() What happens if we apply this to `webbrowser()`? ###Code with CallCarver() as webbrowser_carver: webbrowser("http://www.example.com") ###Output _____no_output_____ ###Markdown We see that retrieving a URL from the Web requires quite some functionality: ###Code print(webbrowser_carver.called_functions(qualified=True)) ###Output ['urllib.request.urlopen', 'urllib.request.open', 'urllib.request.__init__', 'urllib.request.full_url', 'urllib.parse.unwrap', 'urllib.parse.splittag', 'urllib.request._parse', 'urllib.parse.splittype', 'urllib.parse.splithost', 'urllib.parse.unquote', 'urllib.request.data', 'urllib.request.request_host', 'urllib.parse.urlparse', 'urllib.parse._coerce_args', 'urllib.parse.urlsplit', 'urllib.parse.clear_cache', 'urllib.parse._splitnetloc', 'urllib.parse._noop', 'urllib.request.do_request_', 'urllib.request.has_proxy', 'urllib.request.has_header', 'urllib.request.add_unredirected_header', 'urllib.request._open', 'urllib.request._call_chain', 'urllib.request.http_open', 'urllib.request.do_open', 'http.client.__init__', 'http.client._get_hostport', 'http.client.set_debuglevel', 'urllib.request.<genexpr>', 'urllib.request.get_method', 'http.client.request', 'http.client._send_request', 'http.client.<genexpr>', 'http.client.putrequest', 'http.client._output', 'http.client.putheader', 'http.client._get_content_length', 'http.client.endheaders', 'http.client._send_output', 'http.client.send', 'http.client.connect', 'socket.create_connection', 'socket.getaddrinfo', 'encodings.idna.encode', 'socket._intenum_converter', 'enum.__call__', 'enum.__new__', 'socket.__init__', 'http.client.getresponse', 'socket.makefile', 'socket.readable', 'http.client.begin', 'http.client._read_status', 'socket.readinto', 'http.client.parse_headers', 'email.parser.__init__', 'email.parser.parsestr', 'email.parser.parse', 'email.feedparser.__init__', 'email.message.__init__', 'email.feedparser.feed', 'email.feedparser.push', 'email.feedparser.pushlines', 'email.feedparser._call_parse', 'email.feedparser._parsegen', 'email.feedparser._new_message', 'email.feedparser.__iter__', 'email.feedparser.__next__', 'email.feedparser.readline', 'email.feedparser._parse_headers', 'email._policybase.header_source_parse', 'email.message.set_raw', 'email.message.get_content_type', 'email.message.get', 'email._policybase.header_fetch_parse', 'email._policybase._sanitize_header', 'email.utils._has_surrogates', 'email.message._splitparam', 'email.message.get_content_maintype', 'email.feedparser.close', 'email.message.set_payload', 'email.feedparser._pop_message', 'http.client._check_close', 'http.client.close', 'socket.close', 
'urllib.request.get_full_url', 'urllib.request.http_response', 'http.client.info', 'http.client.getcode', 'http.client.read', 'http.client._safe_read', 'http.client._close_conn', 'socket._decref_socketios', 'socket._real_close'] ###Markdown Among several other functions, we also have a call to `urlparse()`: ###Code urlparse_argument_list = webbrowser_carver.arguments("urllib.parse.urlparse") urlparse_argument_list ###Output _____no_output_____ ###Markdown Again, we can convert this into a well-formatted call: ###Code urlparse_call = simple_call_string("urlparse", urlparse_argument_list[0]) urlparse_call ###Output _____no_output_____ ###Markdown Again, we can re-execute this call: ###Code eval(urlparse_call) ###Output _____no_output_____ ###Markdown We now have successfully carved the call to `urlparse()` out of the `webbrowser()` execution. Replaying Calls Replaying calls in their entirety and in all generality is tricky, as there are several challenges to be addressed. These include:1. We need to be able to _access_ individual functions. If we access a function by name, the name must be in scope. If the name is not visible (for instance, because it is a name internal to the module), we must make it visible.2. Any _resources_ accessed outside of arguments must be recorded and reconstructed for replay as well. This can be difficult if variables refer to external resources such as files or network resources.3. _Complex objects_ must be reconstructed as well.These constraints make carving hard or even impossible if the function to be tested interacts heavily with its environment. To illustrate these issues, consider the `email.parser.parse()` method that is invoked in `webbrowser()`: ###Code email_parse_argument_list = webbrowser_carver.arguments("email.parser.parse") ###Output _____no_output_____ ###Markdown Calls to this method look like this: ###Code email_parse_call = simple_call_string( "email.parser.parse", email_parse_argument_list[0]) email_parse_call ###Output _____no_output_____ ###Markdown We see that `email.parser.parse()` is part of a `email.parser.Parser` object and it gets a `StringIO` object. Both are non-primitive values. How could we possibly reconstruct them? Serializing ObjectsThe answer to the problem of complex objects lies in creating a _persistent_ representation that can be _reconstructed_ at later points in time. This process is known as _serialization_; in Python, it is also known as _pickling_. The `pickle` module provides means to create a serialized representation of an object. Let us apply this on the `email.parser.Parser` object we just found: ###Code import pickle parser_object = email_parse_argument_list[0][0][1] parser_object pickled = pickle.dumps(parser_object) pickled ###Output _____no_output_____ ###Markdown From this string representing the serialized `email.parser.Parser` object, we can recreate the Parser object at any time: ###Code unpickled_parser_object = pickle.loads(pickled) unpickled_parser_object ###Output _____no_output_____ ###Markdown The serialization mechanism allows us to produce a representation for all objects passed as parameters (assuming they can be pickled, that is). We can now extend the `simple_call_string()` function such that it automatically pickles objects. Additionally, we set it up such that if the first parameter is named `self` (i.e., it is a class method), we make it a method of the `self` object. 
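One caveat worth spelling out (our addition, echoing the "assuming they can be pickled" remark above): not every value survives serialization. Open files, sockets, generators, and locally defined functions, for instance, raise errors when pickled. A tiny helper such as the illustrative `is_picklable()` below can check this up front:

```python
import pickle

def is_picklable(value):
    """Return True if value survives a pickle round trip (illustrative helper)."""
    try:
        pickle.loads(pickle.dumps(value))
        return True
    except Exception:  # pickle raises various errors (PicklingError, TypeError, ...)
        return False

assert is_picklable({'scheme': 'http', 'netloc': 'www.example.com'})
assert not is_picklable(lambda x: x)  # lambdas cannot be pickled by reference
```

With this caveat in mind, here are the extended `call_value()` and `call_string()` functions: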
###Code def call_value(value): value_as_string = repr(value) if value_as_string.find('<') >= 0: # Complex object value_as_string = "pickle.loads(" + repr(pickle.dumps(value)) + ")" return value_as_string def call_string(function_name, argument_list): """Return function_name(arg[0], arg[1], ...) as a string, pickling complex objects""" if len(argument_list) > 0: (first_var, first_value) = argument_list[0] if first_var == "self": # Make this a method call method_name = function_name.split(".")[-1] function_name = call_value(first_value) + "." + method_name argument_list = argument_list[1:] return function_name + "(" + \ ", ".join([var + "=" + call_value(value) for (var, value) in argument_list]) + ")" ###Output _____no_output_____ ###Markdown Let us apply the extended `call_string()` method to create a call for `email.parser.parse()`, including pickled objects: ###Code call = call_string("email.parser.parse", email_parse_argument_list[0]) print(call) ###Output pickle.loads(b'\x80\x03cemail.parser\nParser\nq\x00)\x81q\x01}q\x02(X\x06\x00\x00\x00_classq\x03chttp.client\nHTTPMessage\nq\x04X\x06\x00\x00\x00policyq\x05cemail._policybase\nCompat32\nq\x06)\x81q\x07ub.').parse(fp=pickle.loads(b'\x80\x03c_io\nStringIO\nq\x00)\x81q\x01(XD\x01\x00\x00Cache-Control: max-age=604800\r\nContent-Type: text/html; charset=UTF-8\r\nDate: Mon, 26 Nov 2018 11:22:07 GMT\r\nEtag: "1541025663+ident"\r\nExpires: Mon, 03 Dec 2018 11:22:07 GMT\r\nLast-Modified: Fri, 09 Aug 2013 23:54:35 GMT\r\nServer: ECS (dca/24C1)\r\nVary: Accept-Encoding\r\nX-Cache: HIT\r\nContent-Length: 1270\r\nConnection: close\r\n\r\nq\x02X\x01\x00\x00\x00\nq\x03MD\x01Ntq\x04b.'), headersonly=False) ###Markdown With this call involvimng the pickled object, we can now re-run the original call and obtain a valid result: ###Code eval(call) ###Output _____no_output_____ ###Markdown All CallsSo far, we have seen only one call of `webbrowser()`. How many of the calls within `webbrowser()` can we actually carve and replay? Let us try this out and compute the numbers. ###Code import traceback import enum import socket all_functions = set(webbrowser_carver.called_functions(qualified=True)) call_success = set() run_success = set() exceptions_seen = set() for function_name in webbrowser_carver.called_functions(qualified=True): for argument_list in webbrowser_carver.arguments(function_name): try: call = call_string(function_name, argument_list) call_success.add(function_name) result = eval(call) run_success.add(function_name) except Exception as exc: exceptions_seen.add(repr(exc)) # print("->", call, file=sys.stderr) # traceback.print_exc() # print("", file=sys.stderr) continue print("%d/%d calls (%.2f%%) successfully created and %d/%d calls (%.2f%%) successfully ran" % ( len(call_success), len(all_functions), len( call_success) * 100 / len(all_functions), len(run_success), len(all_functions), len(run_success) * 100 / len(all_functions))) ###Output 83/95 calls (87.37%) successfully created and 47/95 calls (49.47%) successfully ran ###Markdown About half of the calls succeed. 
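To get a first overview of why the other half fails, we can tally the exception types collected in `exceptions_seen` above (a quick sketch of ours; splitting on `"("` merely strips the message from each `repr()` string):

```python
from collections import Counter

exception_types = Counter(exc.split("(")[0] for exc in exceptions_seen)
for exc_type, count in exception_types.most_common():
    print(count, exc_type)
```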
Let us take a look into some of the error messages we get: ###Code for i in range(10): print(list(exceptions_seen)[i]) ###Output NameError("name 'SplitResult' is not defined",) SyntaxError('invalid syntax', ('<string>', 1, 16, "urllib.request.<genexpr>(headers={'Host': 'www.example.com', 'User-agent': 'Python-urllib/3.6', 'Connection': 'close'}, .0=pickle.loads(b'\\x80\\x03cbuiltins\\niter\\nq\\x00]q\\x01\\x85q\\x02Rq\\x03.'))")) TypeError('an integer is required (got type object)',) StopIteration() TypeError("_call_chain() got an unexpected keyword argument 'args'",) ResponseNotReady('Idle',) TypeError("putheader() got an unexpected keyword argument 'values'",) AttributeError("'NoneType' object has no attribute 'readline'",) TypeError("can't pickle memoryview objects",) SyntaxError('invalid syntax', ('<string>', 1, 16, "urllib.request.<genexpr>(name='User-agent', val='Python-urllib/3.6', .0=pickle.loads(b'\\x80\\x03cbuiltins\\niter\\nq\\x00]q\\x01\\x85q\\x02Rq\\x03.'))")) ###Markdown We see that:* **A large majority of calls could be converted into call strings.** If this is not the case, this is mostly due to having unserialized objects being passed.* **About half of the calls could be executed.** The error messages for the failing runs are varied; the most frequent being that some internal name is invoked that is not in scope. Our carving mechanism should be taken with a grain of salt: We still do not cover the situation where external variables and values (such as global variables) are being accessed, and the serialization mechanism cannot recreate external resources. Still, if the function of interest falls among those that _can_ be carved and replayed, we can very effectively re-run its calls with their original arguments. Lessons Learned* _Carving_ allows for effective replay of function calls recorded during a system test* A function call can be _orders of magnitude faster_ than a system invocation.* _Serialization_ allows to create persistent representations of complex objects.* Functions that heavily interact with their environment and/or access external resources are difficult to carve. Next StepsThe following chapters make use of the concepts defined here:* In the chapter on [fuzzing APIs](APIFuzzer.ipynb), we discuss how to use carving to _fuzz functions with combinations of carved and newly generated values_. This effectively joins the strengths of carving and fuzzing. BackgroundCarving was invented by Elbaum et al. \cite{Elbaum2006} and originally implemented for Java. In this chapter, we follow several of their design choices (including recording and serializing method arguments only). Exercises Exercise 1: Carving for Regression TestingSo far, during carving, we only have looked into reproducing _calls_, but not into actually checking the _results_ of these calls. This is important for _regression testing_ – i.e. checking whether a change to code does not impede existing functionality. We can build this by recording not only _calls_, but also _return values_ – and then later compare whether the same calls result in the same values. This may not work on all occasions; values that depend on time, randomness, or other external factors may be different. Still, for functionality that abstracts from these details, checking that nothing has changed is an important part of testing. 
Our aim is to design a class `ResultCarver` that extends `CallCarver` by recording both calls and return values.In a first step, create a `traceit()` method that also tracks return values by extending the `traceit()` method. The `traceit()` event type is `"return"` and the `arg` parameter is the returned value. Here is a prototype that only prints out the returned values: ###Code class ResultCarver(CallCarver): def traceit(self, frame, event, arg): if event == "return": if self._log: print("Result:", arg) super().traceit(frame, event, arg) # Need to return traceit function such that it is invoked for return # events return self.traceit with ResultCarver(log=True) as result_carver: my_sqrt(2) ###Output my_sqrt(x=2) Result: 1.414213562373095 __exit__(self=<__main__.ResultCarver object at 0x10feb9d30>, exc_type=None, exc_value=None, tb=None) ###Markdown Part 1: Store function resultsExtend the above code such that results are _stored_ in a way that associates them with the currently returning function (or method). To this end, you need to keep track of the _current stack of called functions_. **Solution.** Here's a solution, building on the above: ###Code class ResultCarver(CallCarver): def reset(self): super().reset() self._call_stack = [] self._results = {} def add_result(self, function_name, arguments, result): key = simple_call_string(function_name, arguments) self._results[key] = result def traceit(self, frame, event, arg): if event == "call": code = frame.f_code function_name = code.co_name qualified_name = get_qualified_name(code) self._call_stack.append( (function_name, qualified_name, get_arguments(frame))) if event == "return": result = arg (function_name, qualified_name, arguments) = self._call_stack.pop() self.add_result(function_name, arguments, result) if function_name != qualified_name: self.add_result(qualified_name, arguments, result) if self._log: print( simple_call_string( function_name, arguments), "=", result) # Keep on processing current calls super().traceit(frame, event, arg) # Need to return traceit function such that it is invoked for return # events return self.traceit with ResultCarver(log=True) as result_carver: my_sqrt(2) result_carver._results ###Output my_sqrt(x=2) my_sqrt(x=2) = 1.414213562373095 __exit__(self=<__main__.ResultCarver object at 0x10fee6438>, exc_type=None, exc_value=None, tb=None) ###Markdown Part 2: Access results Give it a method `result()` that returns the value recorded for that particular function name and result:```pythonclass ResultCarver(CallCarver): def result(self, function_name, argument): """Returns the result recorded for function_name(argument"""``` **Solution.** This is mostly done in the code for part 1: ###Code class ResultCarver(ResultCarver): def result(self, function_name, argument): key = simple_call_string(function_name, arguments) return self._results[key] ###Output _____no_output_____ ###Markdown Part 3: Produce assertionsFor the functions called during `webbrowser()` execution, create a set of _assertions_ that check whether the result returned is still the same. Test this for `urllib.parse.urlparse()` and `urllib.parse.urlsplit()`. 
**Solution.** Not too hard now: ###Code with ResultCarver() as webbrowser_result_carver: webbrowser("http://www.example.com") for function_name in ["urllib.parse.urlparse", "urllib.parse.urlsplit"]: for arguments in webbrowser_result_carver.arguments(function_name): try: call = call_string(function_name, arguments) result = webbrowser_result_carver.result(function_name, arguments) print("assert", call, "==", call_value(result)) except Exception: continue ###Output assert urllib.parse.urlparse(url='http://www.example.com', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='', params='', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='', query='', fragment='') ###Markdown We can run these assertions: ###Code from urllib.parse import SplitResult, ParseResult, urlparse, urlsplit assert urlparse( url='http://www.example.com', scheme='', allow_fragments=True) == ParseResult( scheme='http', netloc='www.example.com', path='', params='', query='', fragment='') assert urlsplit( url='http://www.example.com', scheme='', allow_fragments=True) == SplitResult( scheme='http', netloc='www.example.com', path='', query='', fragment='') ###Output _____no_output_____ ###Markdown The actual work takes place in the `traceit()` method, which records all calls in the `_calls` attribute. First, we define two helper functions: ###Code import inspect def get_qualified_name(code): """Return the fully qualified name of the current function""" name = code.co_name module = inspect.getmodule(code) if module is not None: name = module.__name__ + "." + name return name def get_arguments(frame): """Return call arguments in the given frame""" # When called, all arguments are local variables local_variables = frame.f_locals.copy() arguments = [(var, frame.f_locals[var]) for var in local_variables] arguments.reverse() # Want same order as call return arguments class CallCarver(Carver): def add_call(self, function_name, arguments): """Add given call to list of calls""" if function_name not in self._calls: self._calls[function_name] = [] self._calls[function_name].append(arguments) # Tracking function: Record all calls and all args def traceit(self, frame, event, arg): if event != "call": return None code = frame.f_code function_name = code.co_name qualified_name = get_qualified_name(code) arguments = get_arguments(frame) self.add_call(function_name, arguments) if qualified_name != function_name: self.add_call(qualified_name, arguments) if self._log: print(simple_call_string(function_name, arguments)) return None ###Output _____no_output_____ ###Markdown Finally, we need some convenience functions to access the calls: ###Code class CallCarver(CallCarver): def calls(self): """Return a dictionary of all calls traced.""" return self._calls def arguments(self, function_name): """Return a list of all arguments of the given function as (VAR, VALUE) pairs. 
Raises an exception if the function was not traced.""" return self._calls[function_name] def called_functions(self, qualified=False): """Return all functions called.""" if qualified: return [function_name for function_name in self._calls.keys() if function_name.find('.') >= 0] else: return [function_name for function_name in self._calls.keys() if function_name.find('.') < 0] ###Output _____no_output_____ ###Markdown Recording my_sqrt() Let's try out our new `Carver` class – first on a very simple function: ###Code from Intro_Testing import my_sqrt with CallCarver() as sqrt_carver: my_sqrt(2) my_sqrt(4) ###Output _____no_output_____ ###Markdown We can retrieve all calls seen... ###Code sqrt_carver.calls() sqrt_carver.called_functions() ###Output _____no_output_____ ###Markdown ... as well as the arguments of a particular function: ###Code sqrt_carver.arguments("my_sqrt") ###Output _____no_output_____ ###Markdown We define a convenience function for nicer printing of these lists: ###Code def simple_call_string(function_name, argument_list): """Return function_name(arg[0], arg[1], ...) as a string""" return function_name + "(" + \ ", ".join([var + "=" + repr(value) for (var, value) in argument_list]) + ")" for function_name in sqrt_carver.called_functions(): for argument_list in sqrt_carver.arguments(function_name): print(simple_call_string(function_name, argument_list)) ###Output my_sqrt(x=2) my_sqrt(x=4) __exit__(tb=None, exc_value=None, exc_type=None, self=<__main__.CallCarver object at 0x1233f0a90>) ###Markdown This is a syntax we can directly use to invoke `my_sqrt()` again: ###Code eval("my_sqrt(x=2)") ###Output _____no_output_____ ###Markdown Carving urlparse() What happens if we apply this to `webbrowser()`? ###Code with CallCarver() as webbrowser_carver: webbrowser("http://www.example.com") ###Output _____no_output_____ ###Markdown We see that retrieving a URL from the Web requires quite some functionality: ###Code function_list = webbrowser_carver.called_functions(qualified=True) len(function_list) print(function_list[:50]) ###Output ['requests.api.get', 'requests.api.request', 'requests.sessions.__init__', 'requests.utils.default_headers', 'requests.utils.default_user_agent', 'requests.structures.__init__', 'collections.abc.update', 'abc.__instancecheck__', 'requests.structures.__setitem__', 'requests.hooks.default_hooks', 'requests.hooks.<dictcomp>', 'requests.cookies.cookiejar_from_dict', 'http.cookiejar.__init__', 'threading.RLock', 'http.cookiejar.__iter__', 'requests.cookies.<listcomp>', 'http.cookiejar.deepvalues', 'http.cookiejar.vals_sorted_by_key', 'requests.adapters.__init__', 'urllib3.util.retry.__init__', 'urllib3.util.retry.<listcomp>', 'requests.adapters.init_poolmanager', 'urllib3.poolmanager.__init__', 'urllib3.request.__init__', 'urllib3._collections.__init__', 'requests.sessions.mount', 'requests.sessions.<listcomp>', 'requests.sessions.__enter__', 'requests.sessions.request', 'requests.models.__init__', 'requests.sessions.prepare_request', 'requests.cookies.merge_cookies', 'requests.cookies.update', 'requests.utils.get_netrc_auth', 'collections.abc.get', 'os.__getitem__', 'os.encode', 'requests.utils.<genexpr>', 'posixpath.expanduser', 'posixpath._get_sep', 'collections.abc.__contains__', 'os.decode', 'genericpath.exists', 'urllib.parse.urlparse', 'urllib.parse._coerce_args', 'urllib.parse.urlsplit', 'urllib.parse._splitnetloc', 'urllib.parse._checknetloc', 'urllib.parse._noop', 'netrc.__init__'] ###Markdown Among several other functions, we also have a call to 
`urlparse()`: ###Code urlparse_argument_list = webbrowser_carver.arguments("urllib.parse.urlparse") urlparse_argument_list ###Output _____no_output_____ ###Markdown Again, we can convert this into a well-formatted call: ###Code urlparse_call = simple_call_string("urlparse", urlparse_argument_list[0]) urlparse_call ###Output _____no_output_____ ###Markdown Again, we can re-execute this call: ###Code eval(urlparse_call) ###Output _____no_output_____ ###Markdown We now have successfully carved the call to `urlparse()` out of the `webbrowser()` execution. Replaying Calls Replaying calls in their entirety and in all generality is tricky, as there are several challenges to be addressed. These include:1. We need to be able to _access_ individual functions. If we access a function by name, the name must be in scope. If the name is not visible (for instance, because it is a name internal to the module), we must make it visible.2. Any _resources_ accessed outside of arguments must be recorded and reconstructed for replay as well. This can be difficult if variables refer to external resources such as files or network resources.3. _Complex objects_ must be reconstructed as well. These constraints make carving hard or even impossible if the function to be tested interacts heavily with its environment. To illustrate these issues, consider the `email.parser.parse()` method that is invoked in `webbrowser()`: ###Code email_parse_argument_list = webbrowser_carver.arguments("email.parser.parse") ###Output _____no_output_____ ###Markdown Calls to this method look like this: ###Code email_parse_call = simple_call_string( "email.parser.Parser.parse", email_parse_argument_list[0]) email_parse_call ###Output _____no_output_____ ###Markdown We see that `email.parser.Parser.parse()` is part of a `email.parser.Parser` object (`self`) and it gets a `StringIO` object (`fp`). Both are non-primitive values. How could we possibly reconstruct them? Serializing ObjectsThe answer to the problem of complex objects lies in creating a _persistent_ representation that can be _reconstructed_ at later points in time. This process is known as _serialization_; in Python, it is also known as _pickling_. The `pickle` module provides means to create a serialized representation of an object. Let us apply this on the `email.parser.Parser` object we just found: ###Code import pickle email_parse_argument_list parser_object = email_parse_argument_list[0][2][1] parser_object pickled = pickle.dumps(parser_object) pickled ###Output _____no_output_____ ###Markdown From this string representing the serialized `email.parser.Parser` object, we can recreate the Parser object at any time: ###Code unpickled_parser_object = pickle.loads(pickled) unpickled_parser_object ###Output _____no_output_____ ###Markdown The serialization mechanism allows us to produce a representation for all objects passed as parameters (assuming they can be pickled, that is). We can now extend the `simple_call_string()` function such that it automatically pickles objects. Additionally, we set it up such that if the first parameter is named `self` (i.e., it is a class method), we make it a method of the `self` object. ###Code def call_value(value): value_as_string = repr(value) if value_as_string.find('<') >= 0: # Complex object value_as_string = "pickle.loads(" + repr(pickle.dumps(value)) + ")" return value_as_string def call_string(function_name, argument_list): """Return function_name(arg[0], arg[1], ...) 
as a string, pickling complex objects""" if len(argument_list) > 0: (first_var, first_value) = argument_list[0] if first_var == "self": # Make this a method call method_name = function_name.split(".")[-1] function_name = call_value(first_value) + "." + method_name argument_list = argument_list[1:] return function_name + "(" + \ ", ".join([var + "=" + call_value(value) for (var, value) in argument_list]) + ")" ###Output _____no_output_____ ###Markdown Let us apply the extended `call_string()` method to create a call for `email.parser.parse()`, including pickled objects: ###Code call = call_string("email.parser.Parser.parse", email_parse_argument_list[0]) print(call) ###Output email.parser.Parser.parse(headersonly=False, fp=pickle.loads(b'\x80\x04\x95\x8e\x01\x00\x00\x00\x00\x00\x00\x8c\x03_io\x94\x8c\x08StringIO\x94\x93\x94)\x81\x94(Xe\x01\x00\x00Content-Encoding: gzip\r\nAccept-Ranges: bytes\r\nAge: 250975\r\nCache-Control: max-age=604800\r\nContent-Type: text/html; charset=UTF-8\r\nDate: Sun, 17 Oct 2021 11:20:10 GMT\r\nEtag: "3147526947"\r\nExpires: Sun, 24 Oct 2021 11:20:10 GMT\r\nLast-Modified: Thu, 17 Oct 2019 07:18:26 GMT\r\nServer: ECS (dcb/7FA5)\r\nVary: Accept-Encoding\r\nX-Cache: HIT\r\nContent-Length: 648\r\n\r\n\x94\x8c\x01\n\x94Me\x01Nt\x94b.'), self=pickle.loads(b'\x80\x04\x95w\x00\x00\x00\x00\x00\x00\x00\x8c\x0cemail.parser\x94\x8c\x06Parser\x94\x93\x94)\x81\x94}\x94(\x8c\x06_class\x94\x8c\x0bhttp.client\x94\x8c\x0bHTTPMessage\x94\x93\x94\x8c\x06policy\x94\x8c\x11email._policybase\x94\x8c\x08Compat32\x94\x93\x94)\x81\x94ub.')) ###Markdown With this call involving the pickled object, we can now re-run the original call and obtain a valid result: ###Code import email eval(call) ###Output _____no_output_____ ###Markdown All CallsSo far, we have seen only one call of `webbrowser()`. How many of the calls within `webbrowser()` can we actually carve and replay? Let us try this out and compute the numbers. ###Code import traceback import enum import socket all_functions = set(webbrowser_carver.called_functions(qualified=True)) call_success = set() run_success = set() exceptions_seen = set() for function_name in webbrowser_carver.called_functions(qualified=True): for argument_list in webbrowser_carver.arguments(function_name): try: call = call_string(function_name, argument_list) call_success.add(function_name) result = eval(call) run_success.add(function_name) except Exception as exc: exceptions_seen.add(repr(exc)) # print("->", call, file=sys.stderr) # traceback.print_exc() # print("", file=sys.stderr) continue print("%d/%d calls (%.2f%%) successfully created and %d/%d calls (%.2f%%) successfully ran" % ( len(call_success), len(all_functions), len( call_success) * 100 / len(all_functions), len(run_success), len(all_functions), len(run_success) * 100 / len(all_functions))) ###Output 264/325 calls (81.23%) successfully created and 54/325 calls (16.62%) successfully ran ###Markdown About a quarter of the calls succeed. 
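It is also useful to know which functions end up in the replayable category, as these are the ones we could actually keep as carved unit tests. Here is a quick check of ours, based on the `run_success` set computed above:

```python
replayable_functions = sorted(run_success)
print(len(replayable_functions), "functions can be carved and replayed, for example:")
print(replayable_functions[:5])
```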
Let us take a look into some of the error messages we get: ###Code for i in range(10): print(list(exceptions_seen)[i]) ###Output AttributeError("module 'email.parser' has no attribute 'parsestr'") NameError("name 'SplitResult' is not defined") SyntaxError('invalid syntax', ('<string>', 1, 21, "requests.structures.<genexpr>(mappedvalue='keep-alive', casedkey='Connection', .0=pickle.loads(b'\\x80\\x04\\x95\\x1b\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x8c\\x08builtins\\x94\\x8c\\x04iter\\x94\\x93\\x94]\\x94\\x85\\x94R\\x94.'))")) SyntaxError('invalid syntax', ('<string>', 1, 18, "urllib3.response.<genexpr>(.0=pickle.loads(b'\\x80\\x04\\x95\\x1b\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x8c\\x08builtins\\x94\\x8c\\x04iter\\x94\\x93\\x94]\\x94\\x85\\x94R\\x94.'))")) SyntaxError('invalid syntax', ('<string>', 1, 19, "requests.sessions.<listcomp>(prefix='https://', .0=pickle.loads(b'\\x80\\x04\\x95\\x1b\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x8c\\x08builtins\\x94\\x8c\\x04iter\\x94\\x93\\x94]\\x94\\x85\\x94R\\x94.'))")) SyntaxError('invalid syntax', ('<string>', 1, 21, "urllib3.poolmanager.<lambda>(p=pickle.loads(b'\\x80\\x04\\x95\\x02\\x03\\x00\\x00\\x00\\x00\\x00\\x00\\x8c\\x16urllib3.connectionpool\\x94\\x8c\\x12HTTPConnectionPool\\x94\\x93\\x94)\\x81\\x94}\\x94(\\x8c\\x04host\\x94\\x8c\\x0fwww.example.com\\x94\\x8c\\x0b_proxy_host\\x94\\x8c\\x0fwww.example.com\\x94\\x8c\\x04port\\x94KP\\x8c\\x07headers\\x94}\\x94\\x8c\\x06strict\\x94\\x88\\x8c\\x07timeout\\x94\\x8c\\x14urllib3.util.timeout\\x94\\x8c\\x07Timeout\\x94\\x93\\x94)\\x81\\x94}\\x94(\\x8c\\x08_connect\\x94\\x8c\\x08builtins\\x94\\x8c\\x06object\\x94\\x93\\x94)\\x81\\x94\\x8c\\x05_read\\x94h\\x17\\x8c\\x05total\\x94N\\x8c\\x0e_start_connect\\x94Nub\\x8c\\x07retries\\x94\\x8c\\x12urllib3.util.retry\\x94\\x8c\\x05Retry\\x94\\x93\\x94)\\x81\\x94}\\x94(h\\x19K\\x03\\x8c\\x07connect\\x94N\\x8c\\x04read\\x94N\\x8c\\x06status\\x94N\\x8c\\x05other\\x94N\\x8c\\x08redirect\\x94N\\x8c\\x10status_forcelist\\x94\\x8f\\x94\\x8c\\x0fallowed_methods\\x94(\\x8c\\x04HEAD\\x94\\x8c\\x03GET\\x94\\x8c\\x07OPTIONS\\x94\\x8c\\x06DELETE\\x94\\x8c\\x03PUT\\x94\\x8c\\x05TRACE\\x94\\x91\\x94\\x8c\\x0ebackoff_factor\\x94K\\x00\\x8c\\x11raise_on_redirect\\x94\\x88\\x8c\\x0fraise_on_status\\x94\\x88\\x8c\\x07history\\x94)\\x8c\\x1arespect_retry_after_header\\x94\\x88\\x8c\\x1aremove_headers_on_redirect\\x94(\\x8c\\rauthorization\\x94\\x91\\x94ub\\x8c\\x04pool\\x94N\\x8c\\x05block\\x94\\x89\\x8c\\x05proxy\\x94N\\x8c\\rproxy_headers\\x94}\\x94\\x8c\\x0cproxy_config\\x94N\\x8c\\x0fnum_connections\\x94K\\x01\\x8c\\x0cnum_requests\\x94K\\x01\\x8c\\x07conn_kw\\x94}\\x94\\x8c\\tcert_reqs\\x94\\x8c\\tCERT_NONE\\x94\\x8c\\x08ca_certs\\x94N\\x8c\\x0bca_cert_dir\\x94Nub.'))")) NameError("name 'ItemsView' is not defined") SyntaxError('invalid syntax', ('<string>', 1, 20, "urllib3.connection.<genexpr>(v='*/*', .0=pickle.loads(b'\\x80\\x04\\x95\\x1a\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x8c\\x08builtins\\x94\\x8c\\x04iter\\x94\\x93\\x94)\\x85\\x94R\\x94.'))")) SyntaxError('invalid syntax', ('<string>', 1, 21, "requests.structures.<genexpr>(mappedvalue='gzip, deflate', casedkey='Accept-Encoding', .0=pickle.loads(b'\\x80\\x04\\x95\\x1b\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x8c\\x08builtins\\x94\\x8c\\x04iter\\x94\\x93\\x94]\\x94\\x85\\x94R\\x94.'))")) NameError("name 're' is not defined") ###Markdown We see that:* **A large majority of calls could be converted into call strings.** If this is not the case, this is mostly due to having unserialized objects being passed.* **About a quarter of the 
calls could be executed.** The error messages for the failing runs are varied; the most frequent being that some internal name is invoked that is not in scope. Our carving mechanism should be taken with a grain of salt: We still do not cover the situation where external variables and values (such as global variables) are being accessed, and the serialization mechanism cannot recreate external resources. Still, if the function of interest falls among those that _can_ be carved and replayed, we can very effectively re-run its calls with their original arguments. Mining API Grammars from Carved CallsSo far, we have used carved calls to replay exactly the same invocations as originally encountered. However, we can also _mutate_ carved calls to effectively fuzz APIs with previously recorded arguments.The general idea is as follows:1. First, we record all calls of a specific function from a given execution of the program.2. Second, we create a grammar that incorporates all these calls, with separate rules for each argument and alternatives for each value found; this allows us to produce calls that arbitrarily _recombine_ these arguments.Let us explore these steps in the following sections. From Calls to GrammarsLet us start with an example. The `power(x, y)` function returns $x^y$; it is but a wrapper around the equivalent `math.pow()` function. (Since `power()` is defined in Python, we can trace it – in contrast to `math.pow()`, which is implemented in C.) ###Code import math def power(x, y): return math.pow(x, y) ###Output _____no_output_____ ###Markdown Let us invoke `power()` while recording its arguments: ###Code with CallCarver() as power_carver: z = power(1, 2) z = power(3, 4) power_carver.arguments("power") ###Output _____no_output_____ ###Markdown From this list of recorded arguments, we could now create a grammar for the `power()` call, with `x` and `y` expanding into the values seen: ###Code from Grammars import START_SYMBOL, is_valid_grammar, new_symbol, extend_grammar POWER_GRAMMAR = { "<start>": ["power(<x>, <y>)"], "<x>": ["1", "3"], "<y>": ["2", "4"] } assert is_valid_grammar(POWER_GRAMMAR) ###Output _____no_output_____ ###Markdown When fuzzing with this grammar, we then get arbitrary combinations of `x` and `y`; aiming for coverage will ensure that all values are actually tested at least once: ###Code from GrammarCoverageFuzzer import GrammarCoverageFuzzer power_fuzzer = GrammarCoverageFuzzer(POWER_GRAMMAR) [power_fuzzer.fuzz() for i in range(5)] ###Output _____no_output_____ ###Markdown What we need is a method to automatically convert the arguments as seen in `power_carver` to the grammar as seen in `POWER_GRAMMAR`. This is what we define in the next section. A Grammar Miner for CallsWe introduce a class `CallGrammarMiner`, which, given a `Carver`, automatically produces a grammar from the calls seen. To initialize, we pass the carver object: ###Code class CallGrammarMiner(object): def __init__(self, carver, log=False): self.carver = carver self.log = log ###Output _____no_output_____ ###Markdown Initial GrammarThe initial grammar produces a single call. 
The possible `` expansions are to be constructed later: ###Code import copy class CallGrammarMiner(CallGrammarMiner): CALL_SYMBOL = "<call>" def initial_grammar(self): return extend_grammar( {START_SYMBOL: [self.CALL_SYMBOL], self.CALL_SYMBOL: [] }) m = CallGrammarMiner(power_carver) initial_grammar = m.initial_grammar() initial_grammar ###Output _____no_output_____ ###Markdown A Grammar from ArgumentsLet us start by creating a grammar from a list of arguments. The method `mine_arguments_grammar()` creates a grammar for the arguments seen during carving, such as these: ###Code arguments = power_carver.arguments("power") arguments ###Output _____no_output_____ ###Markdown The `mine_arguments_grammar()` method iterates through the variables seen and creates a mapping `variables` of variable names to a set of values seen (as strings, going through `call_value()`). In a second step, it then creates a grammar with a rule for each variable name, expanding into the values seen. ###Code class CallGrammarMiner(CallGrammarMiner): def var_symbol(self, function_name, var, grammar): return new_symbol(grammar, "<" + function_name + "-" + var + ">") def mine_arguments_grammar(self, function_name, arguments, grammar): var_grammar = {} variables = {} for argument_list in arguments: for (var, value) in argument_list: value_string = call_value(value) if self.log: print(var, "=", value_string) if value_string.find("<") >= 0: var_grammar["<langle>"] = ["<"] value_string = value_string.replace("<", "<langle>") if var not in variables: variables[var] = set() variables[var].add(value_string) var_symbols = [] for var in variables: var_symbol = self.var_symbol(function_name, var, grammar) var_symbols.append(var_symbol) var_grammar[var_symbol] = list(variables[var]) return var_grammar, var_symbols m = CallGrammarMiner(power_carver) var_grammar, var_symbols = m.mine_arguments_grammar( "power", arguments, initial_grammar) var_grammar ###Output _____no_output_____ ###Markdown The additional return value `var_symbols` is a list of argument symbols in the call: ###Code var_symbols ###Output _____no_output_____ ###Markdown A Grammar from CallsTo get the grammar for a single function (`mine_function_grammar()`), we add a call to the function: ###Code class CallGrammarMiner(CallGrammarMiner): def function_symbol(self, function_name, grammar): return new_symbol(grammar, "<" + function_name + ">") def mine_function_grammar(self, function_name, grammar): arguments = self.carver.arguments(function_name) if self.log: print(function_name, arguments) var_grammar, var_symbols = self.mine_arguments_grammar( function_name, arguments, grammar) function_grammar = var_grammar function_symbol = self.function_symbol(function_name, grammar) if len(var_symbols) > 0 and var_symbols[0].find("-self") >= 0: # Method call function_grammar[function_symbol] = [ var_symbols[0] + "." 
+ function_name + "(" + ", ".join(var_symbols[1:]) + ")"] else: function_grammar[function_symbol] = [ function_name + "(" + ", ".join(var_symbols) + ")"] if self.log: print(function_symbol, "::=", function_grammar[function_symbol]) return function_grammar, function_symbol m = CallGrammarMiner(power_carver) function_grammar, function_symbol = m.mine_function_grammar( "power", initial_grammar) function_grammar ###Output _____no_output_____ ###Markdown The additionally returned `function_symbol` holds the name of the function call just added: ###Code function_symbol ###Output _____no_output_____ ###Markdown A Grammar from all CallsLet us now repeat the above for all function calls seen during carving. To this end, we simply iterate over all function calls seen: ###Code power_carver.called_functions() class CallGrammarMiner(CallGrammarMiner): def mine_call_grammar(self, function_list=None, qualified=False): grammar = self.initial_grammar() fn_list = function_list if function_list is None: fn_list = self.carver.called_functions(qualified=qualified) for function_name in fn_list: if function_list is None and (function_name.startswith("_") or function_name.startswith("<")): continue # Internal function # Ignore errors with mined functions try: function_grammar, function_symbol = self.mine_function_grammar( function_name, grammar) except: if function_list is not None: raise if function_symbol not in grammar[self.CALL_SYMBOL]: grammar[self.CALL_SYMBOL].append(function_symbol) grammar.update(function_grammar) assert is_valid_grammar(grammar) return grammar ###Output _____no_output_____ ###Markdown The method `mine_call_grammar()` is the one that clients can and should use – first for mining... ###Code m = CallGrammarMiner(power_carver) power_grammar = m.mine_call_grammar() power_grammar ###Output _____no_output_____ ###Markdown ...and then for fuzzing: ###Code power_fuzzer = GrammarCoverageFuzzer(power_grammar) [power_fuzzer.fuzz() for i in range(5)] ###Output _____no_output_____ ###Markdown With this, we have successfully extracted a grammar from a recorded execution; in contrast to "simple" carving, our grammar allows us to _recombine_ arguments and thus to fuzz at the API level. Fuzzing Web FunctionsLet us now apply our grammar miner on a larger API – the `urlparse()` function we already encountered during carving. ###Code with CallCarver() as webbrowser_carver: webbrowser("https://www.fuzzingbook.org") webbrowser("http://www.example.com") ###Output _____no_output_____ ###Markdown We can mine a grammar from the calls encountered: ###Code m = CallGrammarMiner(webbrowser_carver) webbrowser_grammar = m.mine_call_grammar() ###Output _____no_output_____ ###Markdown This is a rather large grammar: ###Code call_list = webbrowser_grammar['<call>'] len(call_list) print(call_list[:20]) ###Output ['<webbrowser>', '<default_headers>', '<default_user_agent>', '<update>', '<default_hooks>', '<cookiejar_from_dict>', '<RLock>', '<deepvalues>', '<vals_sorted_by_key>', '<init_poolmanager>', '<mount>', '<prepare_request>', '<merge_cookies>', '<get_netrc_auth>', '<encode>', '<expanduser>', '<decode>', '<exists>', '<urlparse>', '<urlsplit>'] ###Markdown Here's the rule for the `urlsplit()` function: ###Code webbrowser_grammar["<urlsplit>"] ###Output _____no_output_____ ###Markdown Here are the arguments. Note that although we only passed `http://www.fuzzingbook.org` as a parameter, we also see the `https:` variant. 
That is because opening the `http:` URL automatically redirects to the `https:` URL, which is then also processed by `urlsplit()`. ###Code webbrowser_grammar["<urlsplit-url>"] ###Output _____no_output_____ ###Markdown There also is some variation in the `scheme` argument: ###Code webbrowser_grammar["<urlsplit-scheme>"] ###Output _____no_output_____ ###Markdown If we now apply a fuzzer on these rules, we systematically cover all variations of arguments seen, including, of course, combinations not seen during carving. Again, we are fuzzing at the API level here. ###Code urlsplit_fuzzer = GrammarCoverageFuzzer( webbrowser_grammar, start_symbol="<urlsplit>") for i in range(5): print(urlsplit_fuzzer.fuzz()) ###Output urlsplit(True, '', 'http://www.example.com') urlsplit(True, '', 'http://www.example.com/') urlsplit(True, '', 'https://www.fuzzingbook.org') urlsplit(True, '', 'https://www.fuzzingbook.org/') urlsplit(True, '', 'https://www.fuzzingbook.org') ###Markdown Just as seen with carving, running tests at the API level is orders of magnitude faster than executing system tests. Hence, this calls for means to fuzz at the method level: ###Code from urllib.parse import urlsplit from Timer import Timer with Timer() as urlsplit_timer: urlsplit('http://www.fuzzingbook.org/', 'http', True) urlsplit_timer.elapsed_time() with Timer() as webbrowser_timer: webbrowser("http://www.fuzzingbook.org") webbrowser_timer.elapsed_time() webbrowser_timer.elapsed_time() / urlsplit_timer.elapsed_time() ###Output _____no_output_____ ###Markdown But then again, the caveats encountered during carving apply, notably the requirement to recreate the original function environment. If we also alter or recombine arguments, we get the additional risk of _violating an implicit precondition_ – that is, invoking a function with arguments the function was never designed for. Such _false alarms_, resulting from incorrect invocations rather than incorrect implementations, must then be identified (typically manually) and wed out (for instance, by altering or constraining the grammar). The huge speed gains at the API level, however, may well justify this additional investment. SynopsisThis chapter provides means to _record and replay function calls_ during a system test. Since individual function calls are much faster than a whole system run, such "carving" mechanisms have the potential to run tests much faster. Recording CallsThe `CallCarver` class records all calls occurring while it is active. It is used in conjunction with a `with` clause: ###Code with CallCarver() as carver: y = my_sqrt(2) y = my_sqrt(4) ###Output _____no_output_____ ###Markdown After execution, `called_functions()` lists the names of functions encountered: ###Code carver.called_functions() ###Output _____no_output_____ ###Markdown The `arguments()` method lists the arguments recorded for a function. This is a mapping of the function name to a list of lists of arguments; each argument is a pair (parameter name, value). ###Code carver.arguments('my_sqrt') ###Output _____no_output_____ ###Markdown Complex arguments are properly serialized, such that they can be easily restored. Synthesizing CallsWhile such recorded arguments already could be turned into arguments and calls, a much nicer alternative is to create a _grammar_ for recorded calls. This allows to synthesize arbitrary _combinations_ of arguments, and also offers a base for further customization of calls. The `CallGrammarMiner` class turns a list of carved executions into a grammar. 
###Code my_sqrt_miner = CallGrammarMiner(carver) my_sqrt_grammar = my_sqrt_miner.mine_call_grammar() my_sqrt_grammar ###Output _____no_output_____ ###Markdown This grammar can be used to synthesize calls. ###Code fuzzer = GrammarCoverageFuzzer(my_sqrt_grammar) fuzzer.fuzz() ###Output _____no_output_____ ###Markdown These calls can be executed in isolation, effectively extracting unit tests from system tests: ###Code eval(fuzzer.fuzz()) ###Output _____no_output_____ ###Markdown Lessons Learned* _Carving_ allows for effective replay of function calls recorded during a system test.* A function call can be _orders of magnitude faster_ than a system invocation.* _Serialization_ allows to create persistent representations of complex objects.* Functions that heavily interact with their environment and/or access external resources are difficult to carve.* From carved calls, one can produce API grammars that arbitrarily combine carved arguments. Next StepsIn the next chapter, we will discuss [how to reduce failure-inducing inputs](Reducer.ipynb). BackgroundCarving was invented by Elbaum et al. \cite{Elbaum2006} and originally implemented for Java. In this chapter, we follow several of their design choices (including recording and serializing method arguments only).The combination of carving and fuzzing at the API level is described in \cite{Kampmann2018}. Exercises Exercise 1: Carving for Regression TestingSo far, during carving, we only have looked into reproducing _calls_, but not into actually checking the _results_ of these calls. This is important for _regression testing_ – i.e. checking whether a change to code does not impede existing functionality. We can build this by recording not only _calls_, but also _return values_ – and then later compare whether the same calls result in the same values. This may not work on all occasions; values that depend on time, randomness, or other external factors may be different. Still, for functionality that abstracts from these details, checking that nothing has changed is an important part of testing. Our aim is to design a class `ResultCarver` that extends `CallCarver` by recording both calls and return values.In a first step, create a `traceit()` method that also tracks return values by extending the `traceit()` method. The `traceit()` event type is `"return"` and the `arg` parameter is the returned value. Here is a prototype that only prints out the returned values: ###Code class ResultCarver(CallCarver): def traceit(self, frame, event, arg): if event == "return": if self._log: print("Result:", arg) super().traceit(frame, event, arg) # Need to return traceit function such that it is invoked for return # events return self.traceit with ResultCarver(log=True) as result_carver: my_sqrt(2) ###Output my_sqrt(x=2) Result: 1.414213562373095 __exit__(tb=None, exc_value=None, exc_type=None, self=<__main__.ResultCarver object at 0x123807dc0>) ###Markdown Part 1: Store function resultsExtend the above code such that results are _stored_ in a way that associates them with the currently returning function (or method). To this end, you need to keep track of the _current stack of called functions_. 
**Solution.** Here's a solution, building on the above: ###Code class ResultCarver(CallCarver): def reset(self): super().reset() self._call_stack = [] self._results = {} def add_result(self, function_name, arguments, result): key = simple_call_string(function_name, arguments) self._results[key] = result def traceit(self, frame, event, arg): if event == "call": code = frame.f_code function_name = code.co_name qualified_name = get_qualified_name(code) self._call_stack.append( (function_name, qualified_name, get_arguments(frame))) if event == "return": result = arg (function_name, qualified_name, arguments) = self._call_stack.pop() self.add_result(function_name, arguments, result) if function_name != qualified_name: self.add_result(qualified_name, arguments, result) if self._log: print( simple_call_string( function_name, arguments), "=", result) # Keep on processing current calls super().traceit(frame, event, arg) # Need to return traceit function such that it is invoked for return # events return self.traceit with ResultCarver(log=True) as result_carver: my_sqrt(2) result_carver._results ###Output my_sqrt(x=2) my_sqrt(x=2) = 1.414213562373095 __exit__(tb=None, exc_value=None, exc_type=None, self=<__main__.ResultCarver object at 0x1238074f0>) ###Markdown Part 2: Access results Give it a method `result()` that returns the value recorded for that particular function name and result:```pythonclass ResultCarver(CallCarver): def result(self, function_name, argument): """Returns the result recorded for function_name(argument"""``` **Solution.** This is mostly done in the code for part 1: ###Code class ResultCarver(ResultCarver): def result(self, function_name, argument): key = simple_call_string(function_name, arguments) return self._results[key] ###Output _____no_output_____ ###Markdown Part 3: Produce assertionsFor the functions called during `webbrowser()` execution, create a set of _assertions_ that check whether the result returned is still the same. Test this for `urllib.parse.urlparse()` and `urllib.parse.urlsplit()`. 
**Solution.** Not too hard now: ###Code with ResultCarver() as webbrowser_result_carver: webbrowser("http://www.example.com") for function_name in ["urllib.parse.urlparse", "urllib.parse.urlsplit"]: for arguments in webbrowser_result_carver.arguments(function_name): try: call = call_string(function_name, arguments) result = webbrowser_result_carver.result(function_name, arguments) print("assert", call, "==", call_value(result)) except Exception: continue ###Output assert urllib.parse.urlparse(allow_fragments=True, scheme='', url='http://www.example.com') == ParseResult(scheme='http', netloc='www.example.com', path='', params='', query='', fragment='') assert urllib.parse.urlparse(allow_fragments=True, scheme='', url='http://www.example.com/') == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(allow_fragments=True, scheme='', url='http://www.example.com/') == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(allow_fragments=True, scheme='', url='http://www.example.com/') == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(allow_fragments=True, scheme='', url='http://www.example.com/') == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(allow_fragments=True, scheme='', url='http://www.example.com/') == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(allow_fragments=True, scheme='', url='http://www.example.com/') == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(allow_fragments=True, scheme='', url='http://www.example.com/') == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(allow_fragments=True, scheme='', url='http://www.example.com/') == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(allow_fragments=True, scheme='', url='http://www.example.com/') == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(allow_fragments=True, scheme='', url='http://www.example.com/') == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(allow_fragments=True, scheme='', url='http://www.example.com/') == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(allow_fragments=True, scheme='', url='http://www.example.com/') == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(allow_fragments=True, scheme='', url='http://www.example.com/') == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlsplit(allow_fragments=True, scheme='', url='http://www.example.com') == SplitResult(scheme='http', netloc='www.example.com', path='', query='', fragment='') assert urllib.parse.urlsplit(allow_fragments=True, scheme='', url='http://www.example.com/') == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert 
urllib.parse.urlsplit(allow_fragments=True, scheme='', url='http://www.example.com/') == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(allow_fragments=True, scheme='', url='http://www.example.com/') == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(allow_fragments=True, scheme='', url='http://www.example.com/') == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(allow_fragments=True, scheme='', url='http://www.example.com/') == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(allow_fragments=True, scheme='', url='http://www.example.com/') == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(allow_fragments=True, scheme='', url='http://www.example.com/') == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(allow_fragments=True, scheme='', url='http://www.example.com/') == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(allow_fragments=True, scheme='', url='http://www.example.com/') == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(allow_fragments=True, scheme='', url='http://www.example.com/') == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(allow_fragments=True, scheme='', url='http://www.example.com/') == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(allow_fragments=True, scheme='', url='http://www.example.com/') == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(allow_fragments=True, scheme='', url='http://www.example.com/') == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(allow_fragments=True, scheme='', url='http://www.example.com/') == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') ###Markdown We can run these assertions: ###Code from urllib.parse import SplitResult, ParseResult, urlparse, urlsplit assert urlparse( url='http://www.example.com', scheme='', allow_fragments=True) == ParseResult( scheme='http', netloc='www.example.com', path='', params='', query='', fragment='') assert urlsplit( url='http://www.example.com', scheme='', allow_fragments=True) == SplitResult( scheme='http', netloc='www.example.com', path='', query='', fragment='') ###Output _____no_output_____ ###Markdown Carving Unit TestsSo far, we have always generated _system input_, i.e. data that the program as a whole obtains via its input channels. If we are interested in testing only a small set of functions, having to go through the system can be very inefficient. This chapter introduces a technique known as _carving_, which, given a system test, automatically extracts a set of _unit tests_ that replicate the calls seen during the unit test. The key idea is to _record_ such calls such that we can _replay_ them later – as a whole or selectively. 
**Prerequisites*** Carving makes use of dynamic traces of function calls and variables, as introduced in the [chapter on configuration fuzzing](ConfigurationFuzzer.ipynb). System Tests vs Unit TestsRemember the URL grammar introduced for [grammar fuzzing](Grammars.ipynb)? With such a grammar, we can happily test a Web browser again and again, checking how it reacts to arbitrary page requests.Let us define a very simple "web browser" that goes and downloads the content given by the URL. ###Code import urllib.parse import urllib.request import fuzzingbook_utils def webbrowser(url): """Download the http/https resource given by the URL""" response = urllib.request.urlopen(url) if response.getcode() == 200: contents = response.read() return contents ###Output _____no_output_____ ###Markdown Let us apply this on [fuzzingboook.org](https://www.fuzzingbook.org/) and measure the time, using the [Timer class](Timer.ipynb): ###Code from Timer import Timer with Timer() as webbrowser_timer: fuzzingbook_contents = webbrowser( "http://www.fuzzingbook.org/html/Fuzzer.html") print("Downloaded %d bytes in %.2f seconds" % (len(fuzzingbook_contents), webbrowser_timer.elapsed_time())) fuzzingbook_contents[:100] ###Output _____no_output_____ ###Markdown Having to start a whole browser (or having it render a Web page) again and again means lots of overhead, though – in particular if we want to test only a subset of its functionality. In particular, after a change in the code, we would prefer to test only the subset of functions that is affected by the change, rather than running the well-tested functions again and again. Let us assume we change the function that takes care of parsing the given URL and decomposing it into the individual elements – the scheme ("http"), the network location (`"www.fuzzingbook.com"`), or the path (`"/html/Fuzzer.html"`). This function is named `urlparse()`: ###Code from urllib.parse import urlparse urlparse('https://www.fuzzingbook.com/html/Carver.html') ###Output _____no_output_____ ###Markdown You see how the individual elements of the URL – the _scheme_ (`"http"`), the _network location_ (`"www.fuzzingbook.com"`), or the path (`"//html/Carver.html"`) are all properly identified. Other elements (like `params`, `query`, or `fragment`) are empty, because they were not part of our input. The interesting thing is that executing only `urlparse()` is orders of magnitude faster than running all of `webbrowser()`. Let us measure the factor: ###Code runs = 1000 with Timer() as urlparse_timer: for i in range(runs): urlparse('https://www.fuzzingbook.com/html/Carver.html') avg_urlparse_time = urlparse_timer.elapsed_time() / 1000 avg_urlparse_time ###Output _____no_output_____ ###Markdown Compare this to the time required by the webbrowser ###Code webbrowser_timer.elapsed_time() ###Output _____no_output_____ ###Markdown The difference in time is huge: ###Code webbrowser_timer.elapsed_time() / avg_urlparse_time ###Output _____no_output_____ ###Markdown Hence, in the time it takes to run `webbrowser()` once, we can have _tens of thousands_ of executions of `urlparse()` – and this does not even take into account the time it takes the browser to render the downloaded HTML, to run the included scripts, and whatever else happens when a Web page is loaded. Hence, strategies that allow us to test at the _unit_ level are very promising as they can save lots of overhead. 
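As a quick cross-check (this measurement is not part of the original text), Python's standard `timeit` module can estimate the same per-call cost; the absolute numbers depend on the machine, but the order of magnitude should match the `Timer`-based figure above:

```python
import timeit

# Average duration of a single urlparse() call, in seconds,
# estimated over 10,000 repetitions
avg_urlparse_time_timeit = timeit.timeit(
    "urlparse('https://www.fuzzingbook.com/html/Carver.html')",
    setup="from urllib.parse import urlparse",
    number=10000) / 10000

avg_urlparse_time_timeit
```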
Carving Unit TestsTesting methods and functions at the unit level requires a very good understanding of the individual units to be tested as well as their interplay with other units. Setting up an appropriate infrastructure and writing unit tests by hand thus is demanding, yet rewarding. There is, however, an interesting alternative to writing unit tests by hand. The technique of _carving_ automatically _converts system tests into unit tests_ by means of recording and replaying function calls:1. During a system test (given or generated), we _record_ all calls into a function, including all arguments and other variables the function reads.2. From these, we synthesize a self-contained _unit test_ that reconstructs the function call with all arguments.3. This unit test can be executed (replayed) at any time with high efficiency.In the remainder of this chapter, let us explore these steps. Recording CallsOur first challenge is to record function calls together with their arguments. (In the interest of simplicity, we restrict ourself to arguments, ignoring any global variables or other non-arguments that are read by the function.) To record calls and arguments, we use the mechanism [we introduced for coverage](Coverage.ipynb): By setting up a tracer function, we track all calls into individual functions, also saving their arguments. Just like `Coverage` objects, we want to use `Carver` objects to be able to be used in conjunction with the `with` statement, such that we can trace a particular code block:```pythonwith Carver() as carver: function_to_be_traced()c = carver.calls()```The initial definition supports this construct: ###Code import sys class Carver(object): def __init__(self, log=False): self._log = log self.reset() def reset(self): self._calls = {} # Start of `with` block def __enter__(self): self.original_trace_function = sys.gettrace() sys.settrace(self.traceit) return self # End of `with` block def __exit__(self, exc_type, exc_value, tb): sys.settrace(self.original_trace_function) ###Output _____no_output_____ ###Markdown The actual work takes place in the `traceit()` method, which records all calls in the `_calls` attribute. First, we define two helper functions: ###Code import inspect def get_qualified_name(code): """Return the fully qualified name of the current function""" name = code.co_name module = inspect.getmodule(code) if module is not None: name = module.__name__ + "." 
+ name return name def get_arguments(frame): """Return call arguments in the given frame""" # When called, all arguments are local variables arguments = [(var, frame.f_locals[var]) for var in frame.f_locals] arguments.reverse() # Want same order as call return arguments class CallCarver(Carver): def add_call(self, function_name, arguments): """Add given call to list of calls""" if function_name not in self._calls: self._calls[function_name] = [] self._calls[function_name].append(arguments) # Tracking function: Record all calls and all args def traceit(self, frame, event, arg): if event != "call": return None code = frame.f_code function_name = code.co_name qualified_name = get_qualified_name(code) arguments = get_arguments(frame) self.add_call(function_name, arguments) if qualified_name != function_name: self.add_call(qualified_name, arguments) if self._log: print(simple_call_string(function_name, arguments)) return None ###Output _____no_output_____ ###Markdown Finally, we need some convenience functions to access the calls: ###Code class CallCarver(CallCarver): def calls(self): """Return a dictionary of all calls traced.""" return self._calls def arguments(self, function_name): """Return a list of all arguments of the given function as (VAR, VALUE) pairs. Raises an exception if the function was not traced.""" return self._calls[function_name] def called_functions(self, qualified=False): """Return all functions called.""" if qualified: return [function_name for function_name in self._calls.keys() if function_name.find('.') >= 0] else: return [function_name for function_name in self._calls.keys() if function_name.find('.') < 0] ###Output _____no_output_____ ###Markdown Recording my_sqrt() Let's try out our new `Carver` class – first on a very simple function: ###Code from Intro_Testing import my_sqrt with CallCarver() as sqrt_carver: my_sqrt(2) my_sqrt(4) ###Output _____no_output_____ ###Markdown We can retrieve all calls seen... ###Code sqrt_carver.calls() sqrt_carver.called_functions() ###Output _____no_output_____ ###Markdown ... as well as the arguments of a particular function: ###Code sqrt_carver.arguments("my_sqrt") ###Output _____no_output_____ ###Markdown We define a convenience function for nicer printing of these lists: ###Code def simple_call_string(function_name, argument_list): """Return function_name(arg[0], arg[1], ...) as a string""" return function_name + "(" + \ ", ".join([var + "=" + repr(value) for (var, value) in argument_list]) + ")" for function_name in sqrt_carver.called_functions(): for argument_list in sqrt_carver.arguments(function_name): print(simple_call_string(function_name, argument_list)) ###Output my_sqrt(x=2) my_sqrt(x=4) __exit__(self=<__main__.CallCarver object at 0x10de3f470>, exc_type=None, exc_value=None, tb=None) ###Markdown This is a syntax we can directly use to invoke `my_sqrt()` again: ###Code eval("my_sqrt(x=2)") ###Output _____no_output_____ ###Markdown Carving urlparse() What happens if we apply this to `webbrowser()`? 
###Code with CallCarver() as webbrowser_carver: webbrowser("http://www.example.com") ###Output _____no_output_____ ###Markdown We see that retrieving a URL from the Web requires quite some functionality: ###Code print(webbrowser_carver.called_functions(qualified=True)) ###Output ['urllib.request.urlopen', 'urllib.request.open', 'urllib.request.__init__', 'urllib.request.full_url', 'urllib.parse.unwrap', 'urllib.parse.splittag', 'urllib.request._parse', 'urllib.parse.splittype', 'urllib.parse.splithost', 'urllib.parse.unquote', 'urllib.request.data', 'urllib.request.request_host', 'urllib.parse.urlparse', 'urllib.parse._coerce_args', 'urllib.parse.urlsplit', 'urllib.parse._splitnetloc', 'urllib.parse._noop', 'urllib.request.do_request_', 'urllib.request.has_proxy', 'urllib.request.has_header', 'urllib.request.add_unredirected_header', 'urllib.request._open', 'urllib.request._call_chain', 'urllib.request.http_open', 'urllib.request.do_open', 'http.client.__init__', 'http.client._get_hostport', 'http.client.set_debuglevel', 'urllib.request.<genexpr>', 'urllib.request.get_method', 'http.client.request', 'http.client._send_request', 'http.client.<genexpr>', 'http.client.putrequest', 'http.client._output', 'http.client.putheader', 'http.client._get_content_length', 'http.client.endheaders', 'http.client._send_output', 'http.client.send', 'http.client.connect', 'socket.create_connection', 'socket.getaddrinfo', 'encodings.idna.encode', 'socket._intenum_converter', 'enum.__call__', 'enum.__new__', 'socket.__init__', 'http.client.getresponse', 'socket.makefile', 'socket.readable', 'http.client.begin', 'http.client._read_status', 'socket.readinto', 'http.client.parse_headers', 'email.parser.__init__', 'email.parser.parsestr', 'email.parser.parse', 'email.feedparser.__init__', 'email.message.__init__', 'email.feedparser.feed', 'email.feedparser.push', 'email.feedparser.pushlines', 'email.feedparser._call_parse', 'email.feedparser._parsegen', 'email.feedparser._new_message', 'email.feedparser.__iter__', 'email.feedparser.__next__', 'email.feedparser.readline', 'email.feedparser._parse_headers', 'email._policybase.header_source_parse', 'email.message.set_raw', 'email.message.get_content_type', 'email.message.get', 'email._policybase.header_fetch_parse', 'email._policybase._sanitize_header', 'email.utils._has_surrogates', 'email.message._splitparam', 'email.message.get_content_maintype', 'email.feedparser.close', 'email.message.set_payload', 'email.feedparser._pop_message', 'http.client._check_close', 'http.client.close', 'socket.close', 'urllib.request.get_full_url', 'urllib.request.http_response', 'http.client.info', 'http.client.getcode', 'http.client.read', 'http.client._safe_read', 'http.client._close_conn', 'socket._decref_socketios', 'socket._real_close'] ###Markdown Among several other functions, we also have a call to `urlparse()`: ###Code urlparse_argument_list = webbrowser_carver.arguments("urllib.parse.urlparse") urlparse_argument_list ###Output _____no_output_____ ###Markdown Again, we can convert this into a well-formatted call: ###Code urlparse_call = simple_call_string("urlparse", urlparse_argument_list[0]) urlparse_call ###Output _____no_output_____ ###Markdown Again, we can re-execute this call: ###Code eval(urlparse_call) ###Output _____no_output_____ ###Markdown We now have successfully carved the call to `urlparse()` out of the `webbrowser()` execution. Replaying Calls Replaying calls in their entirety and in all generality is tricky, as there are several challenges to be addressed. 
These include:1. We need to be able to _access_ individual functions. If we access a function by name, the name must be in scope. If the name is not visible (for instance, because it is a name internal to the module), we must make it visible.2. Any _resources_ accessed outside of arguments must be recorded and reconstructed for replay as well. This can be difficult if variables refer to external resources such as files or network resources.3. _Complex objects_ must be reconstructed as well.These constraints make carving hard or even impossible if the function to be tested interacts heavily with its environment. To illustrate these issues, consider the `email.parser.parse()` method that is invoked in `webbrowser()`: ###Code email_parse_argument_list = webbrowser_carver.arguments("email.parser.parse") ###Output _____no_output_____ ###Markdown Calls to this method look like this: ###Code email_parse_call = simple_call_string( "email.parser.parse", email_parse_argument_list[0]) email_parse_call ###Output _____no_output_____ ###Markdown We see that `email.parser.parse()` is part of a `email.parser.Parser` object and it gets a `StringIO` object. Both are non-primitive values. How could we possibly reconstruct them? Serializing ObjectsThe answer to the problem of complex objects lies in creating a _persistent_ representation that can be _reconstructed_ at later points in time. This process is known as _serialization_; in Python, it is also known as _pickling_. The `pickle` module provides means to create a serialized representation of an object. Let us apply this on the `email.parser.Parser` object we just found: ###Code import pickle parser_object = email_parse_argument_list[0][0][1] parser_object pickled = pickle.dumps(parser_object) pickled ###Output _____no_output_____ ###Markdown From this string representing the serialized `email.parser.Parser` object, we can recreate the Parser object at any time: ###Code unpickled_parser_object = pickle.loads(pickled) unpickled_parser_object ###Output _____no_output_____ ###Markdown The serialization mechanism allows us to produce a representation for all objects passed as parameters (assuming they can be pickled, that is). We can now extend the `simple_call_string()` function such that it automatically pickles objects. Additionally, we set it up such that if the first parameter is named `self` (i.e., it is a class method), we make it a method of the `self` object. ###Code def call_value(value): value_as_string = repr(value) if value_as_string.find('<') >= 0: # Complex object value_as_string = "pickle.loads(" + repr(pickle.dumps(value)) + ")" return value_as_string def call_string(function_name, argument_list): """Return function_name(arg[0], arg[1], ...) as a string, pickling complex objects""" if len(argument_list) > 0: (first_var, first_value) = argument_list[0] if first_var == "self": # Make this a method call method_name = function_name.split(".")[-1] function_name = call_value(first_value) + "." 
+ method_name argument_list = argument_list[1:] return function_name + "(" + \ ", ".join([var + "=" + call_value(value) for (var, value) in argument_list]) + ")" ###Output _____no_output_____ ###Markdown Let us apply the extended `call_string()` method to create a call for `email.parser.parse()`, including pickled objects: ###Code call = call_string("email.parser.parse", email_parse_argument_list[0]) print(call) ###Output pickle.loads(b'\x80\x03cemail.parser\nParser\nq\x00)\x81q\x01}q\x02(X\x06\x00\x00\x00_classq\x03chttp.client\nHTTPMessage\nq\x04X\x06\x00\x00\x00policyq\x05cemail._policybase\nCompat32\nq\x06)\x81q\x07ub.').parse(fp=pickle.loads(b'\x80\x03c_io\nStringIO\nq\x00)\x81q\x01(XD\x01\x00\x00Cache-Control: max-age=604800\r\nContent-Type: text/html; charset=UTF-8\r\nDate: Tue, 13 Nov 2018 17:22:37 GMT\r\nEtag: "1541025663+ident"\r\nExpires: Tue, 20 Nov 2018 17:22:37 GMT\r\nLast-Modified: Fri, 09 Aug 2013 23:54:35 GMT\r\nServer: ECS (dca/53DB)\r\nVary: Accept-Encoding\r\nX-Cache: HIT\r\nContent-Length: 1270\r\nConnection: close\r\n\r\nq\x02X\x01\x00\x00\x00\nq\x03MD\x01Ntq\x04b.'), headersonly=False) ###Markdown With this call involving the pickled object, we can now re-run the original call and obtain a valid result: ###Code eval(call) ###Output _____no_output_____ ###Markdown All CallsSo far, we have seen only one call of `webbrowser()`. How many of the calls within `webbrowser()` can we actually carve and replay? Let us try this out and compute the numbers. ###Code import traceback import enum import socket all_functions = set(webbrowser_carver.called_functions(qualified=True)) call_success = set() run_success = set() exceptions_seen = set() for function_name in webbrowser_carver.called_functions(qualified=True): for argument_list in webbrowser_carver.arguments(function_name): try: call = call_string(function_name, argument_list) call_success.add(function_name) result = eval(call) run_success.add(function_name) except Exception as exc: exceptions_seen.add(repr(exc)) # print("->", call, file=sys.stderr) # traceback.print_exc() # print("", file=sys.stderr) continue print("%d/%d calls (%.2f%%) successfully created and %d/%d calls (%.2f%%) successfully ran" % ( len(call_success), len(all_functions), len( call_success) * 100 / len(all_functions), len(run_success), len(all_functions), len(run_success) * 100 / len(all_functions))) ###Output 82/94 calls (87.23%) successfully created and 46/94 calls (48.94%) successfully ran ###Markdown About half of the calls succeed. 
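Before looking at the failures in detail, the sets computed above can be combined to see which functions fall into which category (a small addition, using only the variables from the loop above):

```python
# Functions whose carved calls could be replayed successfully
replayable = run_success

# Functions whose call string could be built, but whose replay failed
replay_failed = call_success - run_success

# Functions for which not even a call string could be constructed
not_carvable = all_functions - call_success

print("Replayable:", len(replayable))
print("Replay failed:", len(replay_failed))
print("Not carvable:", len(not_carvable))
```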
Let us take a look into some of the error messages we get: ###Code for i in range(10): print(list(exceptions_seen)[i]) ###Output SyntaxError('invalid syntax', ('<string>', 1, 13, "http.client.<genexpr>(k='Host', .0=pickle.loads(b'\\x80\\x03cbuiltins\\niter\\nq\\x00]q\\x01\\x85q\\x02Rq\\x03.'))")) ResponseNotReady('Idle',) TypeError("putheader() got an unexpected keyword argument 'values'",) CannotSendHeader() AttributeError("module 'enum' has no attribute '__call__'",) SyntaxError('invalid syntax', ('<string>', 1, 13, "http.client.<genexpr>(.0=pickle.loads(b'\\x80\\x03cbuiltins\\niter\\nq\\x00]q\\x01\\x85q\\x02Rq\\x03.'))")) TypeError("can't pickle memoryview objects",) AttributeError("'NoneType' object has no attribute 'read'",) NameError("name 'Compat32' is not defined",) SyntaxError('invalid syntax', ('<string>', 1, 16, "urllib.request.<genexpr>(name='Host', val='www.example.com', .0=pickle.loads(b'\\x80\\x03cbuiltins\\niter\\nq\\x00]q\\x01\\x85q\\x02Rq\\x03.'))")) ###Markdown We see that:* **A large majority of calls could be converted into call strings.** If this is not the case, this is mostly due to having unserialized objects being passed.* **About half of the calls could be executed.** The error messages for the failing runs are varied; the most frequent being that some internal name is invoked that is not in scope. Our carving mechanism should be taken with a grain of salt: We still do not cover the situation where external variables and values (such as global variables) are being accessed, and the serialization mechanism cannot recreate external resources. Still, if the function of interest falls among those that _can_ be carved and replayed, we can very effectively re-run its calls with their original arguments. Lessons Learned* _Carving_ allows for effective replay of function calls recorded during a system test* A function call can be _orders of magnitude faster_ than a system invocation.* _Serialization_ allows to create persistent representations of complex objects.* Functions that heavily interact with their environment and/or access external resources are difficult to carve. Next StepsThe following chapters make use of the concepts defined here:* In the chapter on [fuzzing APIs](APIFuzzer.ipynb), we discuss how to use carving to _fuzz functions with combinations of carved and newly generated values_. This effectively joins the strengths of carving and fuzzing. BackgroundCarving was invented by Elbaum et al. \cite{Elbaum2006} and originally implemented for Java. In this chapter, we follow several of their design choices (including recording and serializing method arguments only). Exercises Exercise 1: Carving for Regression TestingSo far, during carving, we only have looked into reproducing _calls_, but not into actually checking the _results_ of these calls. This is important for _regression testing_ – i.e. checking whether a change to code does not impede existing functionality. We can build this by recording not only _calls_, but also _return values_ – and then later compare whether the same calls result in the same values. This may not work on all occasions; values that depend on time, randomness, or other external factors may be different. Still, for functionality that abstracts from these details, checking that nothing has changed is an important part of testing. 
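In miniature (and independently of the `ResultCarver` class developed below), the regression idea boils down to remembering a result once and asserting it on later runs:

```python
# Record: run the function once and remember its result
recorded_result = my_sqrt(2)

# Replay (e.g. after a code change): the same call must still yield the same result
assert my_sqrt(2) == recorded_result
```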
Our aim is to design a class `ResultCarver` that extends `CallCarver` by recording both calls and return values.In a first step, extend the `traceit()` method such that it also tracks return values. The `traceit()` event type is `"return"` and the `arg` parameter is the returned value. Here is a prototype that only prints out the returned values: ###Code class ResultCarver(CallCarver): def traceit(self, frame, event, arg): if event == "return": if self._log: print("Result:", arg) super().traceit(frame, event, arg) # Need to return traceit function such that it is invoked for return # events return self.traceit with ResultCarver(log=True) as result_carver: my_sqrt(2) ###Output my_sqrt(x=2) Result: 1.414213562373095 __exit__(self=<__main__.ResultCarver object at 0x10de0dfd0>, exc_type=None, exc_value=None, tb=None) ###Markdown Part 1: Store function resultsExtend the above code such that results are _stored_ in a way that associates them with the currently returning function (or method). To this end, you need to keep track of the _current stack of called functions_. **Solution.** Here's a solution, building on the above: ###Code class ResultCarver(CallCarver): def reset(self): super().reset() self._call_stack = [] self._results = {} def add_result(self, function_name, arguments, result): key = simple_call_string(function_name, arguments) self._results[key] = result def traceit(self, frame, event, arg): if event == "call": code = frame.f_code function_name = code.co_name qualified_name = get_qualified_name(code) self._call_stack.append( (function_name, qualified_name, get_arguments(frame))) if event == "return": result = arg (function_name, qualified_name, arguments) = self._call_stack.pop() self.add_result(function_name, arguments, result) if function_name != qualified_name: self.add_result(qualified_name, arguments, result) if self._log: print( simple_call_string( function_name, arguments), "=", result) # Keep on processing current calls super().traceit(frame, event, arg) # Need to return traceit function such that it is invoked for return # events return self.traceit with ResultCarver(log=True) as result_carver: my_sqrt(2) result_carver._results ###Output my_sqrt(x=2) my_sqrt(x=2) = 1.414213562373095 __exit__(self=<__main__.ResultCarver object at 0x10de3fc50>, exc_type=None, exc_value=None, tb=None) ###Markdown Part 2: Access results Give it a method `result()` that returns the value recorded for that particular function name and argument:```pythonclass ResultCarver(CallCarver): def result(self, function_name, argument): """Returns the result recorded for function_name(argument)"""``` **Solution.** This is mostly done in the code for part 1; again, the lookup key must be built from the `argument` parameter: ###Code class ResultCarver(ResultCarver): def result(self, function_name, argument): key = simple_call_string(function_name, argument) return self._results[key] ###Output _____no_output_____ ###Markdown Part 3: Produce assertionsFor the functions called during `webbrowser()` execution, create a set of _assertions_ that check whether the result returned is still the same. Test this for `urllib.parse.urlparse()` and `urllib.parse.urlsplit()`. 
**Solution.** Not too hard now: ###Code with ResultCarver() as webbrowser_result_carver: webbrowser("http://www.example.com") for function_name in ["urllib.parse.urlparse", "urllib.parse.urlsplit"]: for arguments in webbrowser_result_carver.arguments(function_name): try: call = call_string(function_name, arguments) result = webbrowser_result_carver.result(function_name, arguments) print("assert", call, "==", call_value(result)) except Exception: continue ###Output assert urllib.parse.urlparse(url='http://www.example.com', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='', params='', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='', query='', fragment='') ###Markdown We can run these assertions: ###Code from urllib.parse import SplitResult, ParseResult, urlparse, urlsplit assert urlparse( url='http://www.example.com', scheme='', allow_fragments=True) == ParseResult( scheme='http', netloc='www.example.com', path='', params='', query='', fragment='') assert urlsplit( url='http://www.example.com', scheme='', allow_fragments=True) == SplitResult( scheme='http', netloc='www.example.com', path='', query='', fragment='') ###Output _____no_output_____ ###Markdown Carving Unit TestsSo far, we have always generated _system input_, i.e. data that the program as a whole obtains via its input channels. If we are interested in testing only a small set of functions, having to go through the system can be very inefficient. This chapter introduces a technique known as _carving_, which, given a system test, automatically extracts a set of _unit tests_ that replicate the calls seen during the unit test. The key idea is to _record_ such calls such that we can _replay_ them later – as a whole or selectively. On top, we also explore how to synthesize API grammars from carved unit tests; this means that we can _synthesize API tests without having to write a grammar at all._ **Prerequisites*** Carving makes use of dynamic traces of function calls and variables, as introduced in the [chapter on configuration fuzzing](ConfigurationFuzzer.ipynb).* Using grammars to test units was introduced in the [chapter on API fuzzing](APIFuzzer.ipynb). ###Code import bookutils import APIFuzzer ###Output _____no_output_____ ###Markdown SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from fuzzingbook.Carver import ```and then make use of the following features.This chapter provides means to _record and replay function calls_ during a system test. Since individual function calls are much faster than a whole system run, such "carving" mechanisms have the potential to run tests much faster. Recording CallsThe `CallCarver` class records all calls occurring while it is active. It is used in conjunction with a `with` clause:```python>>> with CallCarver() as carver:>>> y = my_sqrt(2)>>> y = my_sqrt(4)```After execution, `called_functions()` lists the names of functions encountered:```python>>> carver.called_functions()['my_sqrt', '__exit__']```The `arguments()` method lists the arguments recorded for a function. This is a mapping of the function name to a list of lists of arguments; each argument is a pair (parameter name, value).```python>>> carver.arguments('my_sqrt')[[('x', 2)], [('x', 4)]]```Complex arguments are properly serialized, such that they can be easily restored. 
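For illustration (this example is not part of the synopsis above), such a recorded argument list can be turned back into an executable call with the chapter's `simple_call_string()` helper; non-primitive arguments would additionally be pickled by `call_string()`:

```python
>>> call = simple_call_string('my_sqrt', [('x', 2)])
>>> call
'my_sqrt(x=2)'
>>> eval(call)
1.414213562373095
```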
Synthesizing CallsWhile such recorded arguments already could be turned into arguments and calls, a much nicer alternative is to create a _grammar_ for recorded calls. This allows to synthesize arbitrary _combinations_ of arguments, and also offers a base for further customization of calls.The `CallGrammarMiner` class turns a list of carved executions into a grammar.```python>>> my_sqrt_miner = CallGrammarMiner(carver)>>> my_sqrt_grammar = my_sqrt_miner.mine_call_grammar()>>> my_sqrt_grammar{'': [''], '': [''], '': ['4', '2'], '': ['my_sqrt()']}```This grammar can be used to synthesize calls.```python>>> fuzzer = GrammarCoverageFuzzer(my_sqrt_grammar)>>> fuzzer.fuzz()'my_sqrt(2)'```These calls can be executed in isolation, effectively extracting unit tests from system tests:```python>>> eval(fuzzer.fuzz())2.0``` System Tests vs Unit TestsRemember the URL grammar introduced for [grammar fuzzing](Grammars.ipynb)? With such a grammar, we can happily test a Web browser again and again, checking how it reacts to arbitrary page requests.Let us define a very simple "web browser" that goes and downloads the content given by the URL. ###Code import urllib.parse def webbrowser(url): """Download the http/https resource given by the URL""" import requests # Only import if needed r = requests.get(url) return r.text ###Output _____no_output_____ ###Markdown Let us apply this on [fuzzingbook.org](https://www.fuzzingbook.org/) and measure the time, using the [Timer class](Timer.ipynb): ###Code from Timer import Timer with Timer() as webbrowser_timer: fuzzingbook_contents = webbrowser( "http://www.fuzzingbook.org/html/Fuzzer.html") print("Downloaded %d bytes in %.2f seconds" % (len(fuzzingbook_contents), webbrowser_timer.elapsed_time())) fuzzingbook_contents[:100] ###Output _____no_output_____ ###Markdown A full webbrowser, of course, would also render the HTML content. We can achieve this using these commands (but we don't, as we do not want to replicate the entire Web page here):```pythonfrom IPython.display import HTML, displayHTML(fuzzingbook_contents)``` Having to start a whole browser (or having it render a Web page) again and again means lots of overhead, though – in particular if we want to test only a subset of its functionality. In particular, after a change in the code, we would prefer to test only the subset of functions that is affected by the change, rather than running the well-tested functions again and again. Let us assume we change the function that takes care of parsing the given URL and decomposing it into the individual elements – the scheme ("http"), the network location (`"www.fuzzingbook.com"`), or the path (`"/html/Fuzzer.html"`). This function is named `urlparse()`: ###Code from urllib.parse import urlparse urlparse('https://www.fuzzingbook.com/html/Carver.html') ###Output _____no_output_____ ###Markdown You see how the individual elements of the URL – the _scheme_ (`"http"`), the _network location_ (`"www.fuzzingbook.com"`), or the path (`"//html/Carver.html"`) are all properly identified. Other elements (like `params`, `query`, or `fragment`) are empty, because they were not part of our input. The interesting thing is that executing only `urlparse()` is orders of magnitude faster than running all of `webbrowser()`. 
Let us measure the factor: ###Code runs = 1000 with Timer() as urlparse_timer: for i in range(runs): urlparse('https://www.fuzzingbook.com/html/Carver.html') avg_urlparse_time = urlparse_timer.elapsed_time() / 1000 avg_urlparse_time ###Output _____no_output_____ ###Markdown Compare this to the time required by the webbrowser ###Code webbrowser_timer.elapsed_time() ###Output _____no_output_____ ###Markdown The difference in time is huge: ###Code webbrowser_timer.elapsed_time() / avg_urlparse_time ###Output _____no_output_____ ###Markdown Hence, in the time it takes to run `webbrowser()` once, we can have _tens of thousands_ of executions of `urlparse()` – and this does not even take into account the time it takes the browser to render the downloaded HTML, to run the included scripts, and whatever else happens when a Web page is loaded. Hence, strategies that allow us to test at the _unit_ level are very promising as they can save lots of overhead. Carving Unit TestsTesting methods and functions at the unit level requires a very good understanding of the individual units to be tested as well as their interplay with other units. Setting up an appropriate infrastructure and writing unit tests by hand thus is demanding, yet rewarding. There is, however, an interesting alternative to writing unit tests by hand. The technique of _carving_ automatically _converts system tests into unit tests_ by means of recording and replaying function calls:1. During a system test (given or generated), we _record_ all calls into a function, including all arguments and other variables the function reads.2. From these, we synthesize a self-contained _unit test_ that reconstructs the function call with all arguments.3. This unit test can be executed (replayed) at any time with high efficiency.In the remainder of this chapter, let us explore these steps. Recording CallsOur first challenge is to record function calls together with their arguments. (In the interest of simplicity, we restrict ourself to arguments, ignoring any global variables or other non-arguments that are read by the function.) To record calls and arguments, we use the mechanism [we introduced for coverage](Coverage.ipynb): By setting up a tracer function, we track all calls into individual functions, also saving their arguments. Just like `Coverage` objects, we want to use `Carver` objects to be able to be used in conjunction with the `with` statement, such that we can trace a particular code block:```pythonwith Carver() as carver: function_to_be_traced()c = carver.calls()```The initial definition supports this construct: \todo{Get tracker from [dynamic invariants](DynamicInvariants.ipynb)} ###Code import sys class Carver(object): def __init__(self, log=False): self._log = log self.reset() def reset(self): self._calls = {} # Start of `with` block def __enter__(self): self.original_trace_function = sys.gettrace() sys.settrace(self.traceit) return self # End of `with` block def __exit__(self, exc_type, exc_value, tb): sys.settrace(self.original_trace_function) ###Output _____no_output_____ ###Markdown Carving Unit TestsSo far, we have always generated _system input_, i.e. data that the program as a whole obtains via its input channels. If we are interested in testing only a small set of functions, having to go through the system can be very inefficient. This chapter introduces a technique known as _carving_, which, given a system test, automatically extracts a set of _unit tests_ that replicate the calls seen during the unit test. 
The key idea is to _record_ such calls such that we can _replay_ them later – as a whole or selectively. On top, we also explore how to synthesize API grammars from carved unit tests; this means that we can _synthesize API tests without having to write a grammar at all._ **Prerequisites*** Carving makes use of dynamic traces of function calls and variables, as introduced in the [chapter on configuration fuzzing](ConfigurationFuzzer.ipynb).* Using grammars to test units was introduced in the [chapter on API fuzzing](APIFuzzer.ipynb). ###Code import fuzzingbook_utils import APIFuzzer ###Output _____no_output_____ ###Markdown SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from fuzzingbook.Carver import ```and then make use of the following features.This chapter provides means to _record and replay function calls_ during a system test. Since individual function calls are much faster than a whole system run, such "carving" mechanisms have the potential to run tests much faster. Recording CallsThe `CallCarver` class records all calls occurring while it is active. It is used in conjunction with a `with` clause:```python>>> with CallCarver() as carver:>>> y = my_sqrt(2)>>> y = my_sqrt(4)```After execution, `called_functions()` lists the names of functions encountered:```python>>> carver.called_functions()['my_sqrt', '__exit__']```The `arguments()` method lists the arguments recorded for a function. This is a mapping of the function name to a list of lists of arguments; each argument is a pair (parameter name, value).```python>>> carver.arguments('my_sqrt')[[('x', 2)], [('x', 4)]]```Complex arguments are properly serialized, such that they can be easily restored. Synthesizing CallsWhile such recorded arguments already could be turned into arguments and calls, a much nicer alternative is to create a _grammar_ for recorded calls. This allows to synthesize arbitrary _combinations_ of arguments, and also offers a base for further customization of calls.The `CallGrammarMiner` class turns a list of carved executions into a grammar.```python>>> my_sqrt_miner = CallGrammarMiner(carver)>>> my_sqrt_grammar = my_sqrt_miner.mine_call_grammar()>>> my_sqrt_grammar{'': [''], '': [''], '': ['2', '4'], '': ['my_sqrt()']}```This grammar can be used to synthesize calls.```python>>> fuzzer = GrammarCoverageFuzzer(my_sqrt_grammar)>>> fuzzer.fuzz()'my_sqrt(4)'```These calls can be executed in isolation, effectively extracting unit tests from system tests:```python>>> eval(fuzzer.fuzz())1.414213562373095``` System Tests vs Unit TestsRemember the URL grammar introduced for [grammar fuzzing](Grammars.ipynb)? With such a grammar, we can happily test a Web browser again and again, checking how it reacts to arbitrary page requests.Let us define a very simple "web browser" that goes and downloads the content given by the URL. 
###Code import urllib.parse def webbrowser(url): """Download the http/https resource given by the URL""" import requests # Only import if needed r = requests.get(url) return r.text ###Output _____no_output_____ ###Markdown Let us apply this on [fuzzingbook.org](https://www.fuzzingbook.org/) and measure the time, using the [Timer class](Timer.ipynb): ###Code from Timer import Timer with Timer() as webbrowser_timer: fuzzingbook_contents = webbrowser( "http://www.fuzzingbook.org/html/Fuzzer.html") print("Downloaded %d bytes in %.2f seconds" % (len(fuzzingbook_contents), webbrowser_timer.elapsed_time())) fuzzingbook_contents[:100] ###Output _____no_output_____ ###Markdown A full webbrowser, of course, would also render the HTML content. We can achieve this using these commands (but we don't, as we do not want to replicate the entire Web page here):```pythonfrom IPython.display import HTML, displayHTML(fuzzingbook_contents)``` Having to start a whole browser (or having it render a Web page) again and again means lots of overhead, though – in particular if we want to test only a subset of its functionality. In particular, after a change in the code, we would prefer to test only the subset of functions that is affected by the change, rather than running the well-tested functions again and again. Let us assume we change the function that takes care of parsing the given URL and decomposing it into the individual elements – the scheme ("http"), the network location (`"www.fuzzingbook.com"`), or the path (`"/html/Fuzzer.html"`). This function is named `urlparse()`: ###Code from urllib.parse import urlparse urlparse('https://www.fuzzingbook.com/html/Carver.html') ###Output _____no_output_____ ###Markdown You see how the individual elements of the URL – the _scheme_ (`"http"`), the _network location_ (`"www.fuzzingbook.com"`), or the path (`"//html/Carver.html"`) are all properly identified. Other elements (like `params`, `query`, or `fragment`) are empty, because they were not part of our input. The interesting thing is that executing only `urlparse()` is orders of magnitude faster than running all of `webbrowser()`. Let us measure the factor: ###Code runs = 1000 with Timer() as urlparse_timer: for i in range(runs): urlparse('https://www.fuzzingbook.com/html/Carver.html') avg_urlparse_time = urlparse_timer.elapsed_time() / 1000 avg_urlparse_time ###Output _____no_output_____ ###Markdown Compare this to the time required by the webbrowser ###Code webbrowser_timer.elapsed_time() ###Output _____no_output_____ ###Markdown The difference in time is huge: ###Code webbrowser_timer.elapsed_time() / avg_urlparse_time ###Output _____no_output_____ ###Markdown Hence, in the time it takes to run `webbrowser()` once, we can have _tens of thousands_ of executions of `urlparse()` – and this does not even take into account the time it takes the browser to render the downloaded HTML, to run the included scripts, and whatever else happens when a Web page is loaded. Hence, strategies that allow us to test at the _unit_ level are very promising as they can save lots of overhead. Carving Unit TestsTesting methods and functions at the unit level requires a very good understanding of the individual units to be tested as well as their interplay with other units. Setting up an appropriate infrastructure and writing unit tests by hand thus is demanding, yet rewarding. There is, however, an interesting alternative to writing unit tests by hand. 
The technique of _carving_ automatically _converts system tests into unit tests_ by means of recording and replaying function calls:1. During a system test (given or generated), we _record_ all calls into a function, including all arguments and other variables the function reads.2. From these, we synthesize a self-contained _unit test_ that reconstructs the function call with all arguments.3. This unit test can be executed (replayed) at any time with high efficiency.In the remainder of this chapter, let us explore these steps. Recording CallsOur first challenge is to record function calls together with their arguments. (In the interest of simplicity, we restrict ourself to arguments, ignoring any global variables or other non-arguments that are read by the function.) To record calls and arguments, we use the mechanism [we introduced for coverage](Coverage.ipynb): By setting up a tracer function, we track all calls into individual functions, also saving their arguments. Just like `Coverage` objects, we want to use `Carver` objects to be able to be used in conjunction with the `with` statement, such that we can trace a particular code block:```pythonwith Carver() as carver: function_to_be_traced()c = carver.calls()```The initial definition supports this construct: \todo{Get tracker from [dynamic invariants](DynamicInvariants.ipynb)} ###Code import sys class Carver(object): def __init__(self, log=False): self._log = log self.reset() def reset(self): self._calls = {} # Start of `with` block def __enter__(self): self.original_trace_function = sys.gettrace() sys.settrace(self.traceit) return self # End of `with` block def __exit__(self, exc_type, exc_value, tb): sys.settrace(self.original_trace_function) ###Output _____no_output_____ ###Markdown The actual work takes place in the `traceit()` method, which records all calls in the `_calls` attribute. First, we define two helper functions: ###Code import inspect def get_qualified_name(code): """Return the fully qualified name of the current function""" name = code.co_name module = inspect.getmodule(code) if module is not None: name = module.__name__ + "." + name return name def get_arguments(frame): """Return call arguments in the given frame""" # When called, all arguments are local variables arguments = [(var, frame.f_locals[var]) for var in frame.f_locals] arguments.reverse() # Want same order as call return arguments class CallCarver(Carver): def add_call(self, function_name, arguments): """Add given call to list of calls""" if function_name not in self._calls: self._calls[function_name] = [] self._calls[function_name].append(arguments) # Tracking function: Record all calls and all args def traceit(self, frame, event, arg): if event != "call": return None code = frame.f_code function_name = code.co_name qualified_name = get_qualified_name(code) arguments = get_arguments(frame) self.add_call(function_name, arguments) if qualified_name != function_name: self.add_call(qualified_name, arguments) if self._log: print(simple_call_string(function_name, arguments)) return None ###Output _____no_output_____ ###Markdown Finally, we need some convenience functions to access the calls: ###Code class CallCarver(CallCarver): def calls(self): """Return a dictionary of all calls traced.""" return self._calls def arguments(self, function_name): """Return a list of all arguments of the given function as (VAR, VALUE) pairs. 
Raises an exception if the function was not traced.""" return self._calls[function_name] def called_functions(self, qualified=False): """Return all functions called.""" if qualified: return [function_name for function_name in self._calls.keys() if function_name.find('.') >= 0] else: return [function_name for function_name in self._calls.keys() if function_name.find('.') < 0] ###Output _____no_output_____ ###Markdown Recording my_sqrt() Let's try out our new `Carver` class – first on a very simple function: ###Code from Intro_Testing import my_sqrt with CallCarver() as sqrt_carver: my_sqrt(2) my_sqrt(4) ###Output _____no_output_____ ###Markdown We can retrieve all calls seen... ###Code sqrt_carver.calls() sqrt_carver.called_functions() ###Output _____no_output_____ ###Markdown ... as well as the arguments of a particular function: ###Code sqrt_carver.arguments("my_sqrt") ###Output _____no_output_____ ###Markdown We define a convenience function for nicer printing of these lists: ###Code def simple_call_string(function_name, argument_list): """Return function_name(arg[0], arg[1], ...) as a string""" return function_name + "(" + \ ", ".join([var + "=" + repr(value) for (var, value) in argument_list]) + ")" for function_name in sqrt_carver.called_functions(): for argument_list in sqrt_carver.arguments(function_name): print(simple_call_string(function_name, argument_list)) ###Output my_sqrt(x=2) my_sqrt(x=4) __exit__(self=<__main__.CallCarver object at 0x10bd73e10>, exc_type=None, exc_value=None, tb=None) ###Markdown This is a syntax we can directly use to invoke `my_sqrt()` again: ###Code eval("my_sqrt(x=2)") ###Output _____no_output_____ ###Markdown Carving urlparse() What happens if we apply this to `webbrowser()`? ###Code with CallCarver() as webbrowser_carver: webbrowser("http://www.example.com") ###Output _____no_output_____ ###Markdown We see that retrieving a URL from the Web requires quite some functionality: ###Code function_list = webbrowser_carver.called_functions(qualified=True) len(function_list) print(function_list[:50]) ###Output ['requests.api.get', 'requests.api.request', 'requests.sessions.__init__', 'requests.utils.default_headers', 'requests.utils.default_user_agent', 'requests.structures.__init__', 'collections.abc.update', 'abc.__instancecheck__', '_weakrefset.__contains__', 'requests.structures.__setitem__', 'requests.hooks.default_hooks', 'requests.hooks.<dictcomp>', 'requests.cookies.cookiejar_from_dict', 'http.cookiejar.__init__', 'threading.RLock', 'http.cookiejar.__iter__', 'requests.cookies.<listcomp>', 'http.cookiejar.deepvalues', 'http.cookiejar.vals_sorted_by_key', 'requests.adapters.__init__', 'urllib3.util.retry.__init__', 'requests.adapters.init_poolmanager', 'urllib3.poolmanager.__init__', 'urllib3.request.__init__', 'urllib3._collections.__init__', 'requests.sessions.mount', 'requests.sessions.<listcomp>', 'requests.sessions.__enter__', 'requests.sessions.request', 'requests.models.__init__', 'requests.sessions.prepare_request', 'requests.cookies.merge_cookies', 'requests.cookies.update', 'requests.utils.get_netrc_auth', 'posixpath.expanduser', 'posixpath._get_sep', 'collections.abc.__contains__', 'os.__getitem__', 'os.encode', 'os.decode', 'genericpath.exists', 'urllib.parse.urlparse', 'urllib.parse._coerce_args', 'urllib.parse.urlsplit', 'urllib.parse._splitnetloc', 'urllib.parse._noop', 'netrc.__init__', '_bootlocale.getpreferredencoding', 'codecs.__init__', 'netrc._parse'] ###Markdown Among several other functions, we also have a call to `urlparse()`: 
###Code urlparse_argument_list = webbrowser_carver.arguments("urllib.parse.urlparse") urlparse_argument_list ###Output _____no_output_____ ###Markdown Again, we can convert this into a well-formatted call: ###Code urlparse_call = simple_call_string("urlparse", urlparse_argument_list[0]) urlparse_call ###Output _____no_output_____ ###Markdown Again, we can re-execute this call: ###Code eval(urlparse_call) ###Output _____no_output_____ ###Markdown We now have successfully carved the call to `urlparse()` out of the `webbrowser()` execution. Replaying Calls Replaying calls in their entirety and in all generality is tricky, as there are several challenges to be addressed. These include:1. We need to be able to _access_ individual functions. If we access a function by name, the name must be in scope. If the name is not visible (for instance, because it is a name internal to the module), we must make it visible.2. Any _resources_ accessed outside of arguments must be recorded and reconstructed for replay as well. This can be difficult if variables refer to external resources such as files or network resources.3. _Complex objects_ must be reconstructed as well.These constraints make carving hard or even impossible if the function to be tested interacts heavily with its environment. To illustrate these issues, consider the `email.parser.parse()` method that is invoked in `webbrowser()`: ###Code email_parse_argument_list = webbrowser_carver.arguments("email.parser.parse") ###Output _____no_output_____ ###Markdown Calls to this method look like this: ###Code email_parse_call = simple_call_string( "email.parser.parse", email_parse_argument_list[0]) email_parse_call ###Output _____no_output_____ ###Markdown We see that `email.parser.parse()` is part of a `email.parser.Parser` object and it gets a `StringIO` object. Both are non-primitive values. How could we possibly reconstruct them? Serializing ObjectsThe answer to the problem of complex objects lies in creating a _persistent_ representation that can be _reconstructed_ at later points in time. This process is known as _serialization_; in Python, it is also known as _pickling_. The `pickle` module provides means to create a serialized representation of an object. Let us apply this on the `email.parser.Parser` object we just found: ###Code import pickle parser_object = email_parse_argument_list[0][0][1] parser_object pickled = pickle.dumps(parser_object) pickled ###Output _____no_output_____ ###Markdown From this string representing the serialized `email.parser.Parser` object, we can recreate the Parser object at any time: ###Code unpickled_parser_object = pickle.loads(pickled) unpickled_parser_object ###Output _____no_output_____ ###Markdown The serialization mechanism allows us to produce a representation for all objects passed as parameters (assuming they can be pickled, that is). We can now extend the `simple_call_string()` function such that it automatically pickles objects. Additionally, we set it up such that if the first parameter is named `self` (i.e., it is a class method), we make it a method of the `self` object. ###Code def call_value(value): value_as_string = repr(value) if value_as_string.find('<') >= 0: # Complex object value_as_string = "pickle.loads(" + repr(pickle.dumps(value)) + ")" return value_as_string def call_string(function_name, argument_list): """Return function_name(arg[0], arg[1], ...) 
as a string, pickling complex objects""" if len(argument_list) > 0: (first_var, first_value) = argument_list[0] if first_var == "self": # Make this a method call method_name = function_name.split(".")[-1] function_name = call_value(first_value) + "." + method_name argument_list = argument_list[1:] return function_name + "(" + \ ", ".join([var + "=" + call_value(value) for (var, value) in argument_list]) + ")" ###Output _____no_output_____ ###Markdown Let us apply the extended `call_string()` method to create a call for `email.parser.parse()`, including pickled objects: ###Code call = call_string("email.parser.parse", email_parse_argument_list[0]) print(call) ###Output pickle.loads(b'\x80\x03cemail.parser\nParser\nq\x00)\x81q\x01}q\x02(X\x06\x00\x00\x00_classq\x03chttp.client\nHTTPMessage\nq\x04X\x06\x00\x00\x00policyq\x05cemail._policybase\nCompat32\nq\x06)\x81q\x07ub.').parse(fp=pickle.loads(b'\x80\x03c_io\nStringIO\nq\x00)\x81q\x01(XX\x01\x00\x00Content-Encoding: gzip\r\nAccept-Ranges: bytes\r\nCache-Control: max-age=604800\r\nContent-Type: text/html; charset=UTF-8\r\nDate: Mon, 09 Sep 2019 15:01:07 GMT\r\nEtag: "1541025663"\r\nExpires: Mon, 16 Sep 2019 15:01:07 GMT\r\nLast-Modified: Fri, 09 Aug 2013 23:54:35 GMT\r\nServer: ECS (dcb/7F13)\r\nVary: Accept-Encoding\r\nX-Cache: HIT\r\nContent-Length: 606\r\n\r\nq\x02X\x01\x00\x00\x00\nq\x03MX\x01Ntq\x04b.'), headersonly=False) ###Markdown With this call involving the pickled object, we can now re-run the original call and obtain a valid result: ###Code eval(call) ###Output _____no_output_____ ###Markdown All CallsSo far, we have seen only one call of `webbrowser()`. How many of the calls within `webbrowser()` can we actually carve and replay? Let us try this out and compute the numbers. ###Code import traceback import enum import socket all_functions = set(webbrowser_carver.called_functions(qualified=True)) call_success = set() run_success = set() exceptions_seen = set() for function_name in webbrowser_carver.called_functions(qualified=True): for argument_list in webbrowser_carver.arguments(function_name): try: call = call_string(function_name, argument_list) call_success.add(function_name) result = eval(call) run_success.add(function_name) except Exception as exc: exceptions_seen.add(repr(exc)) # print("->", call, file=sys.stderr) # traceback.print_exc() # print("", file=sys.stderr) continue print("%d/%d calls (%.2f%%) successfully created and %d/%d calls (%.2f%%) successfully ran" % ( len(call_success), len(all_functions), len( call_success) * 100 / len(all_functions), len(run_success), len(all_functions), len(run_success) * 100 / len(all_functions))) ###Output 241/304 calls (79.28%) successfully created and 99/304 calls (32.57%) successfully ran ###Markdown Only about a third of the calls succeed. 
Let us take a look into some of the error messages we get: ###Code for i in range(10): print(list(exceptions_seen)[i]) ###Output TypeError('module.__new__(): not enough arguments',) TypeError('__contains__() takes no keyword arguments',) SyntaxError('keyword argument repeated', ('<string>', 1, 43, None)) SyntaxError('keyword argument repeated', ('<string>', 1, 98, None)) NameError("name 're' is not defined",) TypeError('get() takes no keyword arguments',) TypeError('Cannot serialize socket object',) TypeError("'NoneType' object is not callable",) PicklingError("Can't pickle <class 'odict_values'>: attribute lookup odict_values on builtins failed",) SyntaxError('invalid syntax', ('<string>', 1, 21, "requests.structures.<genexpr>(.0=pickle.loads(b'\\x80\\x03cbuiltins\\niter\\nq\\x00]q\\x01\\x85q\\x02Rq\\x03.'))")) ###Markdown We see that:* **A large majority of calls could be converted into call strings.** If this is not the case, this is mostly due to unserializable objects being passed.* **Only about a third of the calls could be executed.** The error messages for the failing runs are varied; the most frequent being that some internal name is invoked that is not in scope. Our carving mechanism should be taken with a grain of salt: We still do not cover the situation where external variables and values (such as global variables) are being accessed, and the serialization mechanism cannot recreate external resources. Still, if the function of interest falls among those that _can_ be carved and replayed, we can very effectively re-run its calls with their original arguments. Mining API Grammars from Carved CallsSo far, we have used carved calls to replay exactly the same invocations as originally encountered. However, we can also _mutate_ carved calls to effectively fuzz APIs with previously recorded arguments.The general idea is as follows:1. First, we record all calls of a specific function from a given execution of the program.2. Second, we create a grammar that incorporates all these calls, with separate rules for each argument and alternatives for each value found; this allows us to produce calls that arbitrarily _recombine_ these arguments.Let us explore these steps in the following sections. From Calls to GrammarsLet us start with an example. The `power(x, y)` function returns $x^y$; it is but a wrapper around the equivalent `math.pow()` function. (Since `power()` is defined in Python, we can trace it – in contrast to `math.pow()`, which is implemented in C.) 
###Code import math def power(x, y): return math.pow(x, y) ###Output _____no_output_____ ###Markdown Let us invoke `power()` while recording its arguments: ###Code with CallCarver() as power_carver: z = power(1, 2) z = power(3, 4) power_carver.arguments("power") ###Output _____no_output_____ ###Markdown From this list of recorded arguments, we could now create a grammar for the `power()` call, with `x` and `y` expanding into the values seen: ###Code from Grammars import START_SYMBOL, is_valid_grammar, new_symbol, extend_grammar POWER_GRAMMAR = { "<start>": ["power(<x>, <y>)"], "<x>": ["1", "3"], "<y>": ["2", "4"] } assert is_valid_grammar(POWER_GRAMMAR) ###Output _____no_output_____ ###Markdown When fuzzing with this grammar, we then get arbitrary combinations of `x` and `y`; aiming for coverage will ensure that all values are actually tested at least once: ###Code from GrammarCoverageFuzzer import GrammarCoverageFuzzer power_fuzzer = GrammarCoverageFuzzer(POWER_GRAMMAR) [power_fuzzer.fuzz() for i in range(5)] ###Output _____no_output_____ ###Markdown What we need is a method to automatically convert the arguments as seen in `power_carver` to the grammar as seen in `POWER_GRAMMAR`. This is what we define in the next section. A Grammar Miner for CallsWe introduce a class `CallGrammarMiner`, which, given a `Carver`, automatically produces a grammar from the calls seen. To initialize, we pass the carver object: ###Code class CallGrammarMiner(object): def __init__(self, carver, log=False): self.carver = carver self.log = log ###Output _____no_output_____ ###Markdown Initial GrammarThe initial grammar produces a single call. The possible `` expansions are to be constructed later: ###Code import copy class CallGrammarMiner(CallGrammarMiner): CALL_SYMBOL = "<call>" def initial_grammar(self): return extend_grammar( {START_SYMBOL: [self.CALL_SYMBOL], self.CALL_SYMBOL: [] }) m = CallGrammarMiner(power_carver) initial_grammar = m.initial_grammar() initial_grammar ###Output _____no_output_____ ###Markdown A Grammar from ArgumentsLet us start by creating a grammar from a list of arguments. The method `mine_arguments_grammar()` creates a grammar for the arguments seen during carving, such as these: ###Code arguments = power_carver.arguments("power") arguments ###Output _____no_output_____ ###Markdown The `mine_arguments_grammar()` method iterates through the variables seen and creates a mapping `variables` of variable names to a set of values seen (as strings, going through `call_value()`). In a second step, it then creates a grammar with a rule for each variable name, expanding into the values seen. 
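To make these two steps concrete before diving into the implementation, here is – spelled out by hand, so the dictionaries below are an illustration rather than actual miner output – what the intermediate mapping and the resulting rules look like for the `power()` arguments carved above: ###Code # Step 1: map each variable name to the set of values seen (as strings)
variables = {"x": {"1", "3"}, "y": {"2", "4"}}

# Step 2: one rule per variable, expanding into the values seen
var_grammar = {
    "<power-x>": list(variables["x"]),
    "<power-y>": list(variables["y"]),
} ###Output _____no_output_____ ###Markdown The implementation below automates exactly this construction (and additionally escapes `<` characters that may occur in values).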
###Code class CallGrammarMiner(CallGrammarMiner): def var_symbol(self, function_name, var, grammar): return new_symbol(grammar, "<" + function_name + "-" + var + ">") def mine_arguments_grammar(self, function_name, arguments, grammar): var_grammar = {} variables = {} for argument_list in arguments: for (var, value) in argument_list: value_string = call_value(value) if self.log: print(var, "=", value_string) if value_string.find("<") >= 0: var_grammar["<langle>"] = ["<"] value_string = value_string.replace("<", "<langle>") if var not in variables: variables[var] = set() variables[var].add(value_string) var_symbols = [] for var in variables: var_symbol = self.var_symbol(function_name, var, grammar) var_symbols.append(var_symbol) var_grammar[var_symbol] = list(variables[var]) return var_grammar, var_symbols m = CallGrammarMiner(power_carver) var_grammar, var_symbols = m.mine_arguments_grammar( "power", arguments, initial_grammar) var_grammar ###Output _____no_output_____ ###Markdown The additional return value `var_symbols` is a list of argument symbols in the call: ###Code var_symbols ###Output _____no_output_____ ###Markdown A Grammar from CallsTo get the grammar for a single function (`mine_function_grammar()`), we add a call to the function: ###Code class CallGrammarMiner(CallGrammarMiner): def function_symbol(self, function_name, grammar): return new_symbol(grammar, "<" + function_name + ">") def mine_function_grammar(self, function_name, grammar): arguments = self.carver.arguments(function_name) if self.log: print(function_name, arguments) var_grammar, var_symbols = self.mine_arguments_grammar( function_name, arguments, grammar) function_grammar = var_grammar function_symbol = self.function_symbol(function_name, grammar) if len(var_symbols) > 0 and var_symbols[0].find("-self") >= 0: # Method call function_grammar[function_symbol] = [ var_symbols[0] + "." + function_name + "(" + ", ".join(var_symbols[1:]) + ")"] else: function_grammar[function_symbol] = [ function_name + "(" + ", ".join(var_symbols) + ")"] if self.log: print(function_symbol, "::=", function_grammar[function_symbol]) return function_grammar, function_symbol m = CallGrammarMiner(power_carver) function_grammar, function_symbol = m.mine_function_grammar( "power", initial_grammar) function_grammar ###Output _____no_output_____ ###Markdown The additionally returned `function_symbol` holds the name of the function call just added: ###Code function_symbol ###Output _____no_output_____ ###Markdown A Grammar from all CallsLet us now repeat the above for all function calls seen during carving. 
To this end, we simply iterate over all function calls seen: ###Code power_carver.called_functions() class CallGrammarMiner(CallGrammarMiner): def mine_call_grammar(self, function_list=None, qualified=False): grammar = self.initial_grammar() fn_list = function_list if function_list is None: fn_list = self.carver.called_functions(qualified=qualified) for function_name in fn_list: if function_list is None and (function_name.startswith("_") or function_name.startswith("<")): continue # Internal function # Ignore errors with mined functions try: function_grammar, function_symbol = self.mine_function_grammar( function_name, grammar) except: if function_list is not None: raise if function_symbol not in grammar[self.CALL_SYMBOL]: grammar[self.CALL_SYMBOL].append(function_symbol) grammar.update(function_grammar) assert is_valid_grammar(grammar) return grammar ###Output _____no_output_____ ###Markdown The method `mine_call_grammar()` is the one that clients can and should use – first for mining... ###Code m = CallGrammarMiner(power_carver) power_grammar = m.mine_call_grammar() power_grammar ###Output _____no_output_____ ###Markdown ...and then for fuzzing: ###Code power_fuzzer = GrammarCoverageFuzzer(power_grammar) [power_fuzzer.fuzz() for i in range(5)] ###Output _____no_output_____ ###Markdown With this, we have successfully extracted a grammar from a recorded execution; in contrast to "simple" carving, our grammar allows us to _recombine_ arguments and thus to fuzz at the API level. Fuzzing Web FunctionsLet us now apply our grammar miner on a larger API – the `urlparse()` function we already encountered during carving. ###Code with CallCarver() as webbrowser_carver: webbrowser("https://www.fuzzingbook.org") webbrowser("http://www.example.com") ###Output _____no_output_____ ###Markdown We can mine a grammar from the calls encountered: ###Code m = CallGrammarMiner(webbrowser_carver) webbrowser_grammar = m.mine_call_grammar() ###Output _____no_output_____ ###Markdown This is a rather large grammar: ###Code call_list = webbrowser_grammar['<call>'] len(call_list) print(call_list[:20]) ###Output ['<webbrowser>', '<default_headers>', '<default_user_agent>', '<update>', '<default_hooks>', '<cookiejar_from_dict>', '<RLock>', '<deepvalues>', '<vals_sorted_by_key>', '<init_poolmanager>', '<mount>', '<prepare_request>', '<merge_cookies>', '<get_netrc_auth>', '<expanduser>', '<encode>', '<decode>', '<exists>', '<urlparse>', '<urlsplit>'] ###Markdown Here's the rule for the `urlsplit()` function: ###Code webbrowser_grammar["<urlsplit>"] ###Output _____no_output_____ ###Markdown Here are the arguments. Note that although we only passed `http://www.fuzzingbook.org` as a parameter, we also see the `https:` variant. That is because opening the `http:` URL automatically redirects to the `https:` URL, which is then also processed by `urlsplit()`. ###Code webbrowser_grammar["<urlsplit-url>"] ###Output _____no_output_____ ###Markdown There also is some variation in the `scheme` argument: ###Code webbrowser_grammar["<urlsplit-scheme>"] ###Output _____no_output_____ ###Markdown If we now apply a fuzzer on these rules, we systematically cover all variations of arguments seen, including, of course, combinations not seen during carving. Again, we are fuzzing at the API level here. 
###Code urlsplit_fuzzer = GrammarCoverageFuzzer( webbrowser_grammar, start_symbol="<urlsplit>") for i in range(5): print(urlsplit_fuzzer.fuzz()) ###Output urlsplit('http://www.example.com/', '', True) urlsplit('https://www.fuzzingbook.org', '', True) urlsplit('http://www.example.com', '', True) urlsplit('https://www.fuzzingbook.org/', '', True) urlsplit('http://www.example.com', '', True) ###Markdown Just as seen with carving, running tests at the API level is orders of magnitude faster than executing system tests. Hence, this calls for means to fuzz at the method level: ###Code from urllib.parse import urlsplit from Timer import Timer with Timer() as urlsplit_timer: urlsplit('http://www.fuzzingbook.org/', 'http', True) urlsplit_timer.elapsed_time() with Timer() as webbrowser_timer: webbrowser("http://www.fuzzingbook.org") webbrowser_timer.elapsed_time() webbrowser_timer.elapsed_time() / urlsplit_timer.elapsed_time() ###Output _____no_output_____ ###Markdown But then again, the caveats encountered during carving apply, notably the requirement to recreate the original function environment. If we also alter or recombine arguments, we get the additional risk of _violating an implicit precondition_ – that is, invoking a function with arguments the function was never designed for. Such _false alarms_, resulting from incorrect invocations rather than incorrect implementations, must then be identified (typically manually) and weeded out (for instance, by altering or constraining the grammar). The huge speed gains at the API level, however, may well justify this additional investment. SynopsisThis chapter provides means to _record and replay function calls_ during a system test. Since individual function calls are much faster than a whole system run, such "carving" mechanisms have the potential to run tests much faster. Recording CallsThe `CallCarver` class records all calls occurring while it is active. It is used in conjunction with a `with` clause: ###Code with CallCarver() as carver: y = my_sqrt(2) y = my_sqrt(4) ###Output _____no_output_____ ###Markdown After execution, `called_functions()` lists the names of functions encountered: ###Code carver.called_functions() ###Output _____no_output_____ ###Markdown The `arguments()` method lists the arguments recorded for a function. This is a mapping of the function name to a list of lists of arguments; each argument is a pair (parameter name, value). ###Code carver.arguments('my_sqrt') ###Output _____no_output_____ ###Markdown Complex arguments are properly serialized, such that they can be easily restored. Synthesizing CallsWhile such recorded arguments already could be turned into arguments and calls, a much nicer alternative is to create a _grammar_ for recorded calls. This allows us to synthesize arbitrary _combinations_ of arguments, and also offers a base for further customization of calls. The `CallGrammarMiner` class turns a list of carved executions into a grammar. ###Code my_sqrt_miner = CallGrammarMiner(carver) my_sqrt_grammar = my_sqrt_miner.mine_call_grammar() my_sqrt_grammar ###Output _____no_output_____ ###Markdown This grammar can be used to synthesize calls. 
###Code fuzzer = GrammarCoverageFuzzer(my_sqrt_grammar) fuzzer.fuzz() ###Output _____no_output_____ ###Markdown These calls can be executed in isolation, effectively extracting unit tests from system tests: ###Code eval(fuzzer.fuzz()) ###Output _____no_output_____ ###Markdown Lessons Learned* _Carving_ allows for effective replay of function calls recorded during a system test.* A function call can be _orders of magnitude faster_ than a system invocation.* _Serialization_ allows to create persistent representations of complex objects.* Functions that heavily interact with their environment and/or access external resources are difficult to carve.* From carved calls, one can produce API grammars that arbitrarily combine carved arguments. Next StepsIn the next chapter, we will discuss [how to reduce failure-inducing inputs](Reducer.ipynb). BackgroundCarving was invented by Elbaum et al. \cite{Elbaum2006} and originally implemented for Java. In this chapter, we follow several of their design choices (including recording and serializing method arguments only).The combination of carving and fuzzing at the API level is described in \cite{Kampmann2018}. Exercises Exercise 1: Carving for Regression TestingSo far, during carving, we only have looked into reproducing _calls_, but not into actually checking the _results_ of these calls. This is important for _regression testing_ – i.e. checking whether a change to code does not impede existing functionality. We can build this by recording not only _calls_, but also _return values_ – and then later compare whether the same calls result in the same values. This may not work on all occasions; values that depend on time, randomness, or other external factors may be different. Still, for functionality that abstracts from these details, checking that nothing has changed is an important part of testing. Our aim is to design a class `ResultCarver` that extends `CallCarver` by recording both calls and return values.In a first step, create a `traceit()` method that also tracks return values by extending the `traceit()` method. The `traceit()` event type is `"return"` and the `arg` parameter is the returned value. Here is a prototype that only prints out the returned values: ###Code class ResultCarver(CallCarver): def traceit(self, frame, event, arg): if event == "return": if self._log: print("Result:", arg) super().traceit(frame, event, arg) # Need to return traceit function such that it is invoked for return # events return self.traceit with ResultCarver(log=True) as result_carver: my_sqrt(2) ###Output my_sqrt(x=2) Result: 1.414213562373095 __exit__(self=<__main__.ResultCarver object at 0x1a1cbf41d0>, exc_type=None, exc_value=None, tb=None) ###Markdown Part 1: Store function resultsExtend the above code such that results are _stored_ in a way that associates them with the currently returning function (or method). To this end, you need to keep track of the _current stack of called functions_. 
**Solution.** Here's a solution, building on the above: ###Code class ResultCarver(CallCarver): def reset(self): super().reset() self._call_stack = [] self._results = {} def add_result(self, function_name, arguments, result): key = simple_call_string(function_name, arguments) self._results[key] = result def traceit(self, frame, event, arg): if event == "call": code = frame.f_code function_name = code.co_name qualified_name = get_qualified_name(code) self._call_stack.append( (function_name, qualified_name, get_arguments(frame))) if event == "return": result = arg (function_name, qualified_name, arguments) = self._call_stack.pop() self.add_result(function_name, arguments, result) if function_name != qualified_name: self.add_result(qualified_name, arguments, result) if self._log: print( simple_call_string( function_name, arguments), "=", result) # Keep on processing current calls super().traceit(frame, event, arg) # Need to return traceit function such that it is invoked for return # events return self.traceit with ResultCarver(log=True) as result_carver: my_sqrt(2) result_carver._results ###Output my_sqrt(x=2) my_sqrt(x=2) = 1.414213562373095 __exit__(self=<__main__.ResultCarver object at 0x1a1cbf4748>, exc_type=None, exc_value=None, tb=None) ###Markdown Part 2: Access results Give it a method `result()` that returns the value recorded for that particular function name and argument list:```pythonclass ResultCarver(CallCarver): def result(self, function_name, arguments): """Returns the result recorded for function_name(arguments)"""``` **Solution.** This is mostly done in the code for part 1: ###Code class ResultCarver(ResultCarver): def result(self, function_name, arguments): key = simple_call_string(function_name, arguments) return self._results[key] ###Output _____no_output_____ ###Markdown Part 3: Produce assertionsFor the functions called during `webbrowser()` execution, create a set of _assertions_ that check whether the result returned is still the same. Test this for `urllib.parse.urlparse()` and `urllib.parse.urlsplit()`. 
**Solution.** Not too hard now: ###Code with ResultCarver() as webbrowser_result_carver: webbrowser("http://www.example.com") for function_name in ["urllib.parse.urlparse", "urllib.parse.urlsplit"]: for arguments in webbrowser_result_carver.arguments(function_name): try: call = call_string(function_name, arguments) result = webbrowser_result_carver.result(function_name, arguments) print("assert", call, "==", call_value(result)) except Exception: continue ###Output assert urllib.parse.urlparse(url='http://www.example.com', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert 
urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') ###Markdown We can run these assertions: ###Code from urllib.parse import SplitResult, ParseResult, urlparse, urlsplit assert urlparse( url='http://www.example.com', scheme='', allow_fragments=True) == ParseResult( scheme='http', netloc='www.example.com', path='', params='', query='', fragment='') assert urlsplit( url='http://www.example.com', scheme='', allow_fragments=True) == SplitResult( scheme='http', netloc='www.example.com', path='', query='', fragment='') ###Output _____no_output_____ ###Markdown Carving Unit TestsSo far, we have always generated _system input_, i.e. data that the program as a whole obtains via its input channels. If we are interested in testing only a small set of functions, having to go through the system can be very inefficient. This chapter introduces a technique known as _carving_, which, given a system test, automatically extracts a set of _unit tests_ that replicate the calls seen during the unit test. The key idea is to _record_ such calls such that we can _replay_ them later – as a whole or selectively. On top, we also explore how to synthesize API grammars from carved unit tests; this means that we can _synthesize API tests without having to write a grammar at all._ **Prerequisites*** Carving makes use of dynamic traces of function calls and variables, as introduced in the [chapter on configuration fuzzing](ConfigurationFuzzer.ipynb).* Using grammars to test units was introduced in the [chapter on API fuzzing](APIFuzzer.ipynb). ###Code import bookutils import APIFuzzer ###Output _____no_output_____ ###Markdown SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from fuzzingbook.Carver import ```and then make use of the following features.This chapter provides means to _record and replay function calls_ during a system test. Since individual function calls are much faster than a whole system run, such "carving" mechanisms have the potential to run tests much faster. Recording CallsThe `CallCarver` class records all calls occurring while it is active. 
It is used in conjunction with a `with` clause:```python>>> with CallCarver() as carver:>>> y = my_sqrt(2)>>> y = my_sqrt(4)```After execution, `called_functions()` lists the names of functions encountered:```python>>> carver.called_functions()['my_sqrt', '__exit__']```The `arguments()` method lists the arguments recorded for a function. This is a mapping of the function name to a list of lists of arguments; each argument is a pair (parameter name, value).```python>>> carver.arguments('my_sqrt')[[('x', 2)], [('x', 4)]]```Complex arguments are properly serialized, such that they can be easily restored. Synthesizing CallsWhile such recorded arguments already could be turned into arguments and calls, a much nicer alternative is to create a _grammar_ for recorded calls. This allows to synthesize arbitrary _combinations_ of arguments, and also offers a base for further customization of calls.The `CallGrammarMiner` class turns a list of carved executions into a grammar.```python>>> my_sqrt_miner = CallGrammarMiner(carver)>>> my_sqrt_grammar = my_sqrt_miner.mine_call_grammar()>>> my_sqrt_grammar{'': [''], '': [''], '': ['4', '2'], '': ['my_sqrt()']}```This grammar can be used to synthesize calls.```python>>> fuzzer = GrammarCoverageFuzzer(my_sqrt_grammar)>>> fuzzer.fuzz()'my_sqrt(2)'```These calls can be executed in isolation, effectively extracting unit tests from system tests:```python>>> eval(fuzzer.fuzz())2.0``` System Tests vs Unit TestsRemember the URL grammar introduced for [grammar fuzzing](Grammars.ipynb)? With such a grammar, we can happily test a Web browser again and again, checking how it reacts to arbitrary page requests.Let us define a very simple "web browser" that goes and downloads the content given by the URL. ###Code import urllib.parse def webbrowser(url): """Download the http/https resource given by the URL""" import requests # Only import if needed r = requests.get(url) return r.text ###Output _____no_output_____ ###Markdown Let us apply this on [fuzzingbook.org](https://www.fuzzingbook.org/) and measure the time, using the [Timer class](Timer.ipynb): ###Code from Timer import Timer with Timer() as webbrowser_timer: fuzzingbook_contents = webbrowser( "http://www.fuzzingbook.org/html/Fuzzer.html") print("Downloaded %d bytes in %.2f seconds" % (len(fuzzingbook_contents), webbrowser_timer.elapsed_time())) fuzzingbook_contents[:100] ###Output _____no_output_____ ###Markdown A full webbrowser, of course, would also render the HTML content. We can achieve this using these commands (but we don't, as we do not want to replicate the entire Web page here):```pythonfrom IPython.display import HTML, displayHTML(fuzzingbook_contents)``` Having to start a whole browser (or having it render a Web page) again and again means lots of overhead, though – in particular if we want to test only a subset of its functionality. In particular, after a change in the code, we would prefer to test only the subset of functions that is affected by the change, rather than running the well-tested functions again and again. Let us assume we change the function that takes care of parsing the given URL and decomposing it into the individual elements – the scheme ("http"), the network location (`"www.fuzzingbook.com"`), or the path (`"/html/Fuzzer.html"`). 
This function is named `urlparse()`: ###Code from urllib.parse import urlparse urlparse('https://www.fuzzingbook.com/html/Carver.html') ###Output _____no_output_____ ###Markdown You see how the individual elements of the URL – the _scheme_ (`"http"`), the _network location_ (`"www.fuzzingbook.com"`), or the path (`"//html/Carver.html"`) are all properly identified. Other elements (like `params`, `query`, or `fragment`) are empty, because they were not part of our input. The interesting thing is that executing only `urlparse()` is orders of magnitude faster than running all of `webbrowser()`. Let us measure the factor: ###Code runs = 1000 with Timer() as urlparse_timer: for i in range(runs): urlparse('https://www.fuzzingbook.com/html/Carver.html') avg_urlparse_time = urlparse_timer.elapsed_time() / 1000 avg_urlparse_time ###Output _____no_output_____ ###Markdown Compare this to the time required by the webbrowser ###Code webbrowser_timer.elapsed_time() ###Output _____no_output_____ ###Markdown The difference in time is huge: ###Code webbrowser_timer.elapsed_time() / avg_urlparse_time ###Output _____no_output_____ ###Markdown Hence, in the time it takes to run `webbrowser()` once, we can have _tens of thousands_ of executions of `urlparse()` – and this does not even take into account the time it takes the browser to render the downloaded HTML, to run the included scripts, and whatever else happens when a Web page is loaded. Hence, strategies that allow us to test at the _unit_ level are very promising as they can save lots of overhead. Carving Unit TestsTesting methods and functions at the unit level requires a very good understanding of the individual units to be tested as well as their interplay with other units. Setting up an appropriate infrastructure and writing unit tests by hand thus is demanding, yet rewarding. There is, however, an interesting alternative to writing unit tests by hand. The technique of _carving_ automatically _converts system tests into unit tests_ by means of recording and replaying function calls:1. During a system test (given or generated), we _record_ all calls into a function, including all arguments and other variables the function reads.2. From these, we synthesize a self-contained _unit test_ that reconstructs the function call with all arguments.3. This unit test can be executed (replayed) at any time with high efficiency.In the remainder of this chapter, let us explore these steps. Recording CallsOur first challenge is to record function calls together with their arguments. (In the interest of simplicity, we restrict ourself to arguments, ignoring any global variables or other non-arguments that are read by the function.) To record calls and arguments, we use the mechanism [we introduced for coverage](Coverage.ipynb): By setting up a tracer function, we track all calls into individual functions, also saving their arguments. 
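The essence of this mechanism fits in a few lines. The following self-contained sketch – `show_calls()`, `demo()`, and `calls_seen` are illustrative names, not part of the chapter's infrastructure – registers a global trace function and records the name and arguments of every Python-level call it sees: ###Code import sys

calls_seen = []

def show_calls(frame, event, arg):
    # On a "call" event, the arguments are exactly the frame's local variables
    if event == "call":
        calls_seen.append((frame.f_code.co_name, dict(frame.f_locals)))
    return None

def demo(x, y=3):
    return x + y

sys.settrace(show_calls)
demo(1)
sys.settrace(None)
# calls_seen should now hold [('demo', {'x': 1, 'y': 3})] ###Output _____no_output_____ ###Markdown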
Just like `Coverage` objects, we want to use `Carver` objects to be able to be used in conjunction with the `with` statement, such that we can trace a particular code block:```pythonwith Carver() as carver: function_to_be_traced()c = carver.calls()```The initial definition supports this construct: \todo{Get tracker from [dynamic invariants](DynamicInvariants.ipynb)} ###Code import sys class Carver(object): def __init__(self, log=False): self._log = log self.reset() def reset(self): self._calls = {} # Start of `with` block def __enter__(self): self.original_trace_function = sys.gettrace() sys.settrace(self.traceit) return self # End of `with` block def __exit__(self, exc_type, exc_value, tb): sys.settrace(self.original_trace_function) ###Output _____no_output_____ ###Markdown The actual work takes place in the `traceit()` method, which records all calls in the `_calls` attribute. First, we define two helper functions: ###Code import inspect def get_qualified_name(code): """Return the fully qualified name of the current function""" name = code.co_name module = inspect.getmodule(code) if module is not None: name = module.__name__ + "." + name return name def get_arguments(frame): """Return call arguments in the given frame""" # When called, all arguments are local variables arguments = [(var, frame.f_locals[var]) for var in frame.f_locals] arguments.reverse() # Want same order as call return arguments class CallCarver(Carver): def add_call(self, function_name, arguments): """Add given call to list of calls""" if function_name not in self._calls: self._calls[function_name] = [] self._calls[function_name].append(arguments) # Tracking function: Record all calls and all args def traceit(self, frame, event, arg): if event != "call": return None code = frame.f_code function_name = code.co_name qualified_name = get_qualified_name(code) arguments = get_arguments(frame) self.add_call(function_name, arguments) if qualified_name != function_name: self.add_call(qualified_name, arguments) if self._log: print(simple_call_string(function_name, arguments)) return None ###Output _____no_output_____ ###Markdown Finally, we need some convenience functions to access the calls: ###Code class CallCarver(CallCarver): def calls(self): """Return a dictionary of all calls traced.""" return self._calls def arguments(self, function_name): """Return a list of all arguments of the given function as (VAR, VALUE) pairs. Raises an exception if the function was not traced.""" return self._calls[function_name] def called_functions(self, qualified=False): """Return all functions called.""" if qualified: return [function_name for function_name in self._calls.keys() if function_name.find('.') >= 0] else: return [function_name for function_name in self._calls.keys() if function_name.find('.') < 0] ###Output _____no_output_____ ###Markdown Recording my_sqrt() Let's try out our new `Carver` class – first on a very simple function: ###Code from Intro_Testing import my_sqrt with CallCarver() as sqrt_carver: my_sqrt(2) my_sqrt(4) ###Output _____no_output_____ ###Markdown We can retrieve all calls seen... ###Code sqrt_carver.calls() sqrt_carver.called_functions() ###Output _____no_output_____ ###Markdown ... as well as the arguments of a particular function: ###Code sqrt_carver.arguments("my_sqrt") ###Output _____no_output_____ ###Markdown We define a convenience function for nicer printing of these lists: ###Code def simple_call_string(function_name, argument_list): """Return function_name(arg[0], arg[1], ...) 
as a string""" return function_name + "(" + \ ", ".join([var + "=" + repr(value) for (var, value) in argument_list]) + ")" for function_name in sqrt_carver.called_functions(): for argument_list in sqrt_carver.arguments(function_name): print(simple_call_string(function_name, argument_list)) ###Output my_sqrt(x=2) my_sqrt(x=4) __exit__(self=<__main__.CallCarver object at 0x7fca8c061eb8>, exc_type=None, exc_value=None, tb=None) ###Markdown This is a syntax we can directly use to invoke `my_sqrt()` again: ###Code eval("my_sqrt(x=2)") ###Output _____no_output_____ ###Markdown Carving urlparse() What happens if we apply this to `webbrowser()`? ###Code with CallCarver() as webbrowser_carver: webbrowser("http://www.example.com") ###Output _____no_output_____ ###Markdown We see that retrieving a URL from the Web requires quite some functionality: ###Code function_list = webbrowser_carver.called_functions(qualified=True) len(function_list) print(function_list[:50]) ###Output ['requests.api.get', 'requests.api.request', 'requests.sessions.__init__', 'requests.utils.default_headers', 'requests.utils.default_user_agent', 'requests.structures.__init__', 'collections.abc.update', 'abc.__instancecheck__', '_weakrefset.__contains__', 'requests.structures.__setitem__', 'requests.hooks.default_hooks', 'requests.hooks.<dictcomp>', 'requests.cookies.cookiejar_from_dict', 'http.cookiejar.__init__', 'threading.RLock', 'http.cookiejar.__iter__', 'requests.cookies.<listcomp>', 'http.cookiejar.deepvalues', 'http.cookiejar.vals_sorted_by_key', 'requests.adapters.__init__', 'urllib3.util.retry.__init__', 'urllib3.util.retry.<listcomp>', 'requests.adapters.init_poolmanager', 'urllib3.poolmanager.__init__', 'urllib3.request.__init__', 'urllib3._collections.__init__', 'requests.sessions.mount', 'requests.sessions.<listcomp>', 'requests.sessions.__enter__', 'requests.sessions.request', 'requests.models.__init__', 'requests.sessions.prepare_request', 'requests.cookies.merge_cookies', 'requests.cookies.update', 'requests.utils.get_netrc_auth', 'posixpath.expanduser', 'posixpath._get_sep', 'collections.abc.__contains__', 'os.__getitem__', 'os.encode', 'os.decode', 'genericpath.exists', 'urllib.parse.urlparse', 'urllib.parse._coerce_args', 'urllib.parse.urlsplit', 'urllib.parse._splitnetloc', 'urllib.parse._checknetloc', 'urllib.parse.<genexpr>', 'urllib.parse._noop', 'netrc.__init__'] ###Markdown Among several other functions, we also have a call to `urlparse()`: ###Code urlparse_argument_list = webbrowser_carver.arguments("urllib.parse.urlparse") urlparse_argument_list ###Output _____no_output_____ ###Markdown Again, we can convert this into a well-formatted call: ###Code urlparse_call = simple_call_string("urlparse", urlparse_argument_list[0]) urlparse_call ###Output _____no_output_____ ###Markdown Again, we can re-execute this call: ###Code eval(urlparse_call) ###Output _____no_output_____ ###Markdown We now have successfully carved the call to `urlparse()` out of the `webbrowser()` execution. Replaying Calls Replaying calls in their entirety and in all generality is tricky, as there are several challenges to be addressed. These include:1. We need to be able to _access_ individual functions. If we access a function by name, the name must be in scope. If the name is not visible (for instance, because it is a name internal to the module), we must make it visible.2. Any _resources_ accessed outside of arguments must be recorded and reconstructed for replay as well. 
This can be difficult if variables refer to external resources such as files or network resources.3. _Complex objects_ must be reconstructed as well.These constraints make carving hard or even impossible if the function to be tested interacts heavily with its environment. To illustrate these issues, consider the `email.parser.parse()` method that is invoked in `webbrowser()`: ###Code email_parse_argument_list = webbrowser_carver.arguments("email.parser.parse") ###Output _____no_output_____ ###Markdown Calls to this method look like this: ###Code email_parse_call = simple_call_string( "email.parser.parse", email_parse_argument_list[0]) email_parse_call ###Output _____no_output_____ ###Markdown We see that `email.parser.parse()` is part of a `email.parser.Parser` object and it gets a `StringIO` object. Both are non-primitive values. How could we possibly reconstruct them? Serializing ObjectsThe answer to the problem of complex objects lies in creating a _persistent_ representation that can be _reconstructed_ at later points in time. This process is known as _serialization_; in Python, it is also known as _pickling_. The `pickle` module provides means to create a serialized representation of an object. Let us apply this on the `email.parser.Parser` object we just found: ###Code import pickle parser_object = email_parse_argument_list[0][0][1] parser_object pickled = pickle.dumps(parser_object) pickled ###Output _____no_output_____ ###Markdown From this string representing the serialized `email.parser.Parser` object, we can recreate the Parser object at any time: ###Code unpickled_parser_object = pickle.loads(pickled) unpickled_parser_object ###Output _____no_output_____ ###Markdown The serialization mechanism allows us to produce a representation for all objects passed as parameters (assuming they can be pickled, that is). We can now extend the `simple_call_string()` function such that it automatically pickles objects. Additionally, we set it up such that if the first parameter is named `self` (i.e., it is a class method), we make it a method of the `self` object. ###Code def call_value(value): value_as_string = repr(value) if value_as_string.find('<') >= 0: # Complex object value_as_string = "pickle.loads(" + repr(pickle.dumps(value)) + ")" return value_as_string def call_string(function_name, argument_list): """Return function_name(arg[0], arg[1], ...) as a string, pickling complex objects""" if len(argument_list) > 0: (first_var, first_value) = argument_list[0] if first_var == "self": # Make this a method call method_name = function_name.split(".")[-1] function_name = call_value(first_value) + "." 
+ method_name argument_list = argument_list[1:] return function_name + "(" + \ ", ".join([var + "=" + call_value(value) for (var, value) in argument_list]) + ")" ###Output _____no_output_____ ###Markdown Let us apply the extended `call_string()` method to create a call for `email.parser.parse()`, including pickled objects: ###Code call = call_string("email.parser.parse", email_parse_argument_list[0]) print(call) ###Output pickle.loads(b'\x80\x03cemail.parser\nParser\nq\x00)\x81q\x01}q\x02(X\x06\x00\x00\x00_classq\x03chttp.client\nHTTPMessage\nq\x04X\x06\x00\x00\x00policyq\x05cemail._policybase\nCompat32\nq\x06)\x81q\x07ub.').parse(fp=pickle.loads(b'\x80\x03c_io\nStringIO\nq\x00)\x81q\x01(Xe\x01\x00\x00Content-Encoding: gzip\r\nAccept-Ranges: bytes\r\nAge: 327216\r\nCache-Control: max-age=604800\r\nContent-Type: text/html; charset=UTF-8\r\nDate: Sat, 24 Oct 2020 09:39:42 GMT\r\nEtag: "3147526947"\r\nExpires: Sat, 31 Oct 2020 09:39:42 GMT\r\nLast-Modified: Thu, 17 Oct 2019 07:18:26 GMT\r\nServer: ECS (dcb/7FA5)\r\nVary: Accept-Encoding\r\nX-Cache: HIT\r\nContent-Length: 648\r\n\r\nq\x02X\x01\x00\x00\x00\nq\x03Me\x01Ntq\x04b.'), headersonly=False) ###Markdown With this call involving the pickled object, we can now re-run the original call and obtain a valid result: ###Code eval(call) ###Output _____no_output_____ ###Markdown All CallsSo far, we have seen only one call of `webbrowser()`. How many of the calls within `webbrowser()` can we actually carve and replay? Let us try this out and compute the numbers. ###Code import traceback import enum import socket all_functions = set(webbrowser_carver.called_functions(qualified=True)) call_success = set() run_success = set() exceptions_seen = set() for function_name in webbrowser_carver.called_functions(qualified=True): for argument_list in webbrowser_carver.arguments(function_name): try: call = call_string(function_name, argument_list) call_success.add(function_name) result = eval(call) run_success.add(function_name) except Exception as exc: exceptions_seen.add(repr(exc)) # print("->", call, file=sys.stderr) # traceback.print_exc() # print("", file=sys.stderr) continue print("%d/%d calls (%.2f%%) successfully created and %d/%d calls (%.2f%%) successfully ran" % ( len(call_success), len(all_functions), len( call_success) * 100 / len(all_functions), len(run_success), len(all_functions), len(run_success) * 100 / len(all_functions))) ###Output 254/317 calls (80.13%) successfully created and 99/317 calls (31.23%) successfully ran ###Markdown Only about a third of the calls succeed. 
Let us take a look into some of the error messages we get: ###Code for i in range(10): print(list(exceptions_seen)[i]) ###Output NameError("name 'collections' is not defined",) SyntaxError('invalid syntax', ('<string>', 1, 18, "urllib3.util.url.<listcomp>(.0=pickle.loads(b'\\x80\\x03cbuiltins\\niter\\nq\\x00]q\\x01\\x85q\\x02Rq\\x03.'))")) SyntaxError('invalid syntax', ('<string>', 1, 19, "requests.sessions.<listcomp>(.0=pickle.loads(b'\\x80\\x03cbuiltins\\niter\\nq\\x00]q\\x01\\x85q\\x02Rq\\x03.'))")) NameError("name 'requests' is not defined",) PicklingError("Can't pickle <class 'method_descriptor'>: attribute lookup method_descriptor on builtins failed",) SyntaxError('invalid syntax', ('<string>', 1, 20, "urllib3.util.retry.<listcomp>(.0=pickle.loads(b'\\x80\\x03cbuiltins\\niter\\nq\\x00]q\\x01\\x85q\\x02Rq\\x03.'))")) SyntaxError('invalid syntax', ('<string>', 1, 10, 'machine hg.st.cs.uni-saarland.de\n')) SyntaxError('keyword argument repeated', ('<string>', 1, 98, None)) TypeError("can't pickle _thread.RLock objects",) NameError("name 'environ' is not defined",) ###Markdown We see that:* **A large majority of calls could be converted into call strings.** If this is not the case, this is mostly due to unserializable objects being passed.* **Only about a third of the calls could be executed.** The error messages for the failing runs are varied; the most frequent being that some internal name is invoked that is not in scope. Our carving mechanism should be taken with a grain of salt: We still do not cover the situation where external variables and values (such as global variables) are being accessed, and the serialization mechanism cannot recreate external resources. Still, if the function of interest falls among those that _can_ be carved and replayed, we can very effectively re-run its calls with their original arguments. Mining API Grammars from Carved CallsSo far, we have used carved calls to replay exactly the same invocations as originally encountered. However, we can also _mutate_ carved calls to effectively fuzz APIs with previously recorded arguments.The general idea is as follows:1. First, we record all calls of a specific function from a given execution of the program.2. Second, we create a grammar that incorporates all these calls, with separate rules for each argument and alternatives for each value found; this allows us to produce calls that arbitrarily _recombine_ these arguments.Let us explore these steps in the following sections. From Calls to GrammarsLet us start with an example. The `power(x, y)` function returns $x^y$; it is but a wrapper around the equivalent `math.pow()` function. (Since `power()` is defined in Python, we can trace it – in contrast to `math.pow()`, which is implemented in C.) 
###Code import math def power(x, y): return math.pow(x, y) ###Output _____no_output_____ ###Markdown Let us invoke `power()` while recording its arguments: ###Code with CallCarver() as power_carver: z = power(1, 2) z = power(3, 4) power_carver.arguments("power") ###Output _____no_output_____ ###Markdown From this list of recorded arguments, we could now create a grammar for the `power()` call, with `x` and `y` expanding into the values seen: ###Code from Grammars import START_SYMBOL, is_valid_grammar, new_symbol, extend_grammar POWER_GRAMMAR = { "<start>": ["power(<x>, <y>)"], "<x>": ["1", "3"], "<y>": ["2", "4"] } assert is_valid_grammar(POWER_GRAMMAR) ###Output _____no_output_____ ###Markdown When fuzzing with this grammar, we then get arbitrary combinations of `x` and `y`; aiming for coverage will ensure that all values are actually tested at least once: ###Code from GrammarCoverageFuzzer import GrammarCoverageFuzzer power_fuzzer = GrammarCoverageFuzzer(POWER_GRAMMAR) [power_fuzzer.fuzz() for i in range(5)] ###Output _____no_output_____ ###Markdown What we need is a method to automatically convert the arguments as seen in `power_carver` to the grammar as seen in `POWER_GRAMMAR`. This is what we define in the next section. A Grammar Miner for CallsWe introduce a class `CallGrammarMiner`, which, given a `Carver`, automatically produces a grammar from the calls seen. To initialize, we pass the carver object: ###Code class CallGrammarMiner(object): def __init__(self, carver, log=False): self.carver = carver self.log = log ###Output _____no_output_____ ###Markdown Initial GrammarThe initial grammar produces a single call. The possible `` expansions are to be constructed later: ###Code import copy class CallGrammarMiner(CallGrammarMiner): CALL_SYMBOL = "<call>" def initial_grammar(self): return extend_grammar( {START_SYMBOL: [self.CALL_SYMBOL], self.CALL_SYMBOL: [] }) m = CallGrammarMiner(power_carver) initial_grammar = m.initial_grammar() initial_grammar ###Output _____no_output_____ ###Markdown A Grammar from ArgumentsLet us start by creating a grammar from a list of arguments. The method `mine_arguments_grammar()` creates a grammar for the arguments seen during carving, such as these: ###Code arguments = power_carver.arguments("power") arguments ###Output _____no_output_____ ###Markdown The `mine_arguments_grammar()` method iterates through the variables seen and creates a mapping `variables` of variable names to a set of values seen (as strings, going through `call_value()`). In a second step, it then creates a grammar with a rule for each variable name, expanding into the values seen. 
###Code class CallGrammarMiner(CallGrammarMiner): def var_symbol(self, function_name, var, grammar): return new_symbol(grammar, "<" + function_name + "-" + var + ">") def mine_arguments_grammar(self, function_name, arguments, grammar): var_grammar = {} variables = {} for argument_list in arguments: for (var, value) in argument_list: value_string = call_value(value) if self.log: print(var, "=", value_string) if value_string.find("<") >= 0: var_grammar["<langle>"] = ["<"] value_string = value_string.replace("<", "<langle>") if var not in variables: variables[var] = set() variables[var].add(value_string) var_symbols = [] for var in variables: var_symbol = self.var_symbol(function_name, var, grammar) var_symbols.append(var_symbol) var_grammar[var_symbol] = list(variables[var]) return var_grammar, var_symbols m = CallGrammarMiner(power_carver) var_grammar, var_symbols = m.mine_arguments_grammar( "power", arguments, initial_grammar) var_grammar ###Output _____no_output_____ ###Markdown The additional return value `var_symbols` is a list of argument symbols in the call: ###Code var_symbols ###Output _____no_output_____ ###Markdown A Grammar from CallsTo get the grammar for a single function (`mine_function_grammar()`), we add a call to the function: ###Code class CallGrammarMiner(CallGrammarMiner): def function_symbol(self, function_name, grammar): return new_symbol(grammar, "<" + function_name + ">") def mine_function_grammar(self, function_name, grammar): arguments = self.carver.arguments(function_name) if self.log: print(function_name, arguments) var_grammar, var_symbols = self.mine_arguments_grammar( function_name, arguments, grammar) function_grammar = var_grammar function_symbol = self.function_symbol(function_name, grammar) if len(var_symbols) > 0 and var_symbols[0].find("-self") >= 0: # Method call function_grammar[function_symbol] = [ var_symbols[0] + "." + function_name + "(" + ", ".join(var_symbols[1:]) + ")"] else: function_grammar[function_symbol] = [ function_name + "(" + ", ".join(var_symbols) + ")"] if self.log: print(function_symbol, "::=", function_grammar[function_symbol]) return function_grammar, function_symbol m = CallGrammarMiner(power_carver) function_grammar, function_symbol = m.mine_function_grammar( "power", initial_grammar) function_grammar ###Output _____no_output_____ ###Markdown The additionally returned `function_symbol` holds the name of the function call just added: ###Code function_symbol ###Output _____no_output_____ ###Markdown A Grammar from all CallsLet us now repeat the above for all function calls seen during carving. 
To this end, we simply iterate over all function calls seen: ###Code power_carver.called_functions() class CallGrammarMiner(CallGrammarMiner): def mine_call_grammar(self, function_list=None, qualified=False): grammar = self.initial_grammar() fn_list = function_list if function_list is None: fn_list = self.carver.called_functions(qualified=qualified) for function_name in fn_list: if function_list is None and (function_name.startswith("_") or function_name.startswith("<")): continue # Internal function # Ignore errors with mined functions try: function_grammar, function_symbol = self.mine_function_grammar( function_name, grammar) except: if function_list is not None: raise if function_symbol not in grammar[self.CALL_SYMBOL]: grammar[self.CALL_SYMBOL].append(function_symbol) grammar.update(function_grammar) assert is_valid_grammar(grammar) return grammar ###Output _____no_output_____ ###Markdown The method `mine_call_grammar()` is the one that clients can and should use – first for mining... ###Code m = CallGrammarMiner(power_carver) power_grammar = m.mine_call_grammar() power_grammar ###Output _____no_output_____ ###Markdown ...and then for fuzzing: ###Code power_fuzzer = GrammarCoverageFuzzer(power_grammar) [power_fuzzer.fuzz() for i in range(5)] ###Output _____no_output_____ ###Markdown With this, we have successfully extracted a grammar from a recorded execution; in contrast to "simple" carving, our grammar allows us to _recombine_ arguments and thus to fuzz at the API level. Fuzzing Web FunctionsLet us now apply our grammar miner on a larger API – the `urlparse()` function we already encountered during carving. ###Code with CallCarver() as webbrowser_carver: webbrowser("https://www.fuzzingbook.org") webbrowser("http://www.example.com") ###Output _____no_output_____ ###Markdown We can mine a grammar from the calls encountered: ###Code m = CallGrammarMiner(webbrowser_carver) webbrowser_grammar = m.mine_call_grammar() ###Output _____no_output_____ ###Markdown This is a rather large grammar: ###Code call_list = webbrowser_grammar['<call>'] len(call_list) print(call_list[:20]) ###Output ['<webbrowser>', '<default_headers>', '<default_user_agent>', '<update>', '<default_hooks>', '<cookiejar_from_dict>', '<RLock>', '<deepvalues>', '<vals_sorted_by_key>', '<init_poolmanager>', '<mount>', '<prepare_request>', '<merge_cookies>', '<get_netrc_auth>', '<expanduser>', '<encode>', '<exists>', '<urlparse>', '<urlsplit>', '<getpreferredencoding>'] ###Markdown Here's the rule for the `urlsplit()` function: ###Code webbrowser_grammar["<urlsplit>"] ###Output _____no_output_____ ###Markdown Here are the arguments. Note that although we only passed `http://www.fuzzingbook.org` as a parameter, we also see the `https:` variant. That is because opening the `http:` URL automatically redirects to the `https:` URL, which is then also processed by `urlsplit()`. ###Code webbrowser_grammar["<urlsplit-url>"] ###Output _____no_output_____ ###Markdown There also is some variation in the `scheme` argument: ###Code webbrowser_grammar["<urlsplit-scheme>"] ###Output _____no_output_____ ###Markdown If we now apply a fuzzer on these rules, we systematically cover all variations of arguments seen, including, of course, combinations not seen during carving. Again, we are fuzzing at the API level here. 
###Code urlsplit_fuzzer = GrammarCoverageFuzzer( webbrowser_grammar, start_symbol="<urlsplit>") for i in range(5): print(urlsplit_fuzzer.fuzz()) ###Output urlsplit('http://www.example.com/', '', True) urlsplit('https://www.fuzzingbook.org/', '', True) urlsplit('http://www.example.com', '', True) urlsplit('https://www.fuzzingbook.org', '', True) urlsplit('http://www.example.com', '', True) ###Markdown Just as seen with carving, running tests at the API level is orders of magnitude faster than executing system tests. Hence, this calls for means to fuzz at the method level: ###Code from urllib.parse import urlsplit from Timer import Timer with Timer() as urlsplit_timer: urlsplit('http://www.fuzzingbook.org/', 'http', True) urlsplit_timer.elapsed_time() with Timer() as webbrowser_timer: webbrowser("http://www.fuzzingbook.org") webbrowser_timer.elapsed_time() webbrowser_timer.elapsed_time() / urlsplit_timer.elapsed_time() ###Output _____no_output_____ ###Markdown But then again, the caveats encountered during carving apply, notably the requirement to recreate the original function environment. If we also alter or recombine arguments, we get the additional risk of _violating an implicit precondition_ – that is, invoking a function with arguments the function was never designed for. Such _false alarms_, resulting from incorrect invocations rather than incorrect implementations, must then be identified (typically manually) and wed out (for instance, by altering or constraining the grammar). The huge speed gains at the API level, however, may well justify this additional investment. SynopsisThis chapter provides means to _record and replay function calls_ during a system test. Since individual function calls are much faster than a whole system run, such "carving" mechanisms have the potential to run tests much faster. Recording CallsThe `CallCarver` class records all calls occurring while it is active. It is used in conjunction with a `with` clause: ###Code with CallCarver() as carver: y = my_sqrt(2) y = my_sqrt(4) ###Output _____no_output_____ ###Markdown After execution, `called_functions()` lists the names of functions encountered: ###Code carver.called_functions() ###Output _____no_output_____ ###Markdown The `arguments()` method lists the arguments recorded for a function. This is a mapping of the function name to a list of lists of arguments; each argument is a pair (parameter name, value). ###Code carver.arguments('my_sqrt') ###Output _____no_output_____ ###Markdown Complex arguments are properly serialized, such that they can be easily restored. Synthesizing CallsWhile such recorded arguments already could be turned into arguments and calls, a much nicer alternative is to create a _grammar_ for recorded calls. This allows to synthesize arbitrary _combinations_ of arguments, and also offers a base for further customization of calls. The `CallGrammarMiner` class turns a list of carved executions into a grammar. ###Code my_sqrt_miner = CallGrammarMiner(carver) my_sqrt_grammar = my_sqrt_miner.mine_call_grammar() my_sqrt_grammar ###Output _____no_output_____ ###Markdown This grammar can be used to synthesize calls. 
###Code fuzzer = GrammarCoverageFuzzer(my_sqrt_grammar) fuzzer.fuzz() ###Output _____no_output_____ ###Markdown These calls can be executed in isolation, effectively extracting unit tests from system tests: ###Code eval(fuzzer.fuzz()) ###Output _____no_output_____ ###Markdown Lessons Learned* _Carving_ allows for effective replay of function calls recorded during a system test.* A function call can be _orders of magnitude faster_ than a system invocation.* _Serialization_ allows to create persistent representations of complex objects.* Functions that heavily interact with their environment and/or access external resources are difficult to carve.* From carved calls, one can produce API grammars that arbitrarily combine carved arguments. Next StepsIn the next chapter, we will discuss [how to reduce failure-inducing inputs](Reducer.ipynb). BackgroundCarving was invented by Elbaum et al. \cite{Elbaum2006} and originally implemented for Java. In this chapter, we follow several of their design choices (including recording and serializing method arguments only).The combination of carving and fuzzing at the API level is described in \cite{Kampmann2018}. Exercises Exercise 1: Carving for Regression TestingSo far, during carving, we only have looked into reproducing _calls_, but not into actually checking the _results_ of these calls. This is important for _regression testing_ – i.e. checking whether a change to code does not impede existing functionality. We can build this by recording not only _calls_, but also _return values_ – and then later compare whether the same calls result in the same values. This may not work on all occasions; values that depend on time, randomness, or other external factors may be different. Still, for functionality that abstracts from these details, checking that nothing has changed is an important part of testing. Our aim is to design a class `ResultCarver` that extends `CallCarver` by recording both calls and return values.In a first step, create a `traceit()` method that also tracks return values by extending the `traceit()` method. The `traceit()` event type is `"return"` and the `arg` parameter is the returned value. Here is a prototype that only prints out the returned values: ###Code class ResultCarver(CallCarver): def traceit(self, frame, event, arg): if event == "return": if self._log: print("Result:", arg) super().traceit(frame, event, arg) # Need to return traceit function such that it is invoked for return # events return self.traceit with ResultCarver(log=True) as result_carver: my_sqrt(2) ###Output my_sqrt(x=2) Result: 1.414213562373095 __exit__(self=<__main__.ResultCarver object at 0x7fca8c235080>, exc_type=None, exc_value=None, tb=None) ###Markdown Part 1: Store function resultsExtend the above code such that results are _stored_ in a way that associates them with the currently returning function (or method). To this end, you need to keep track of the _current stack of called functions_. 
**Solution.** Here's a solution, building on the above: ###Code class ResultCarver(CallCarver): def reset(self): super().reset() self._call_stack = [] self._results = {} def add_result(self, function_name, arguments, result): key = simple_call_string(function_name, arguments) self._results[key] = result def traceit(self, frame, event, arg): if event == "call": code = frame.f_code function_name = code.co_name qualified_name = get_qualified_name(code) self._call_stack.append( (function_name, qualified_name, get_arguments(frame))) if event == "return": result = arg (function_name, qualified_name, arguments) = self._call_stack.pop() self.add_result(function_name, arguments, result) if function_name != qualified_name: self.add_result(qualified_name, arguments, result) if self._log: print( simple_call_string( function_name, arguments), "=", result) # Keep on processing current calls super().traceit(frame, event, arg) # Need to return traceit function such that it is invoked for return # events return self.traceit with ResultCarver(log=True) as result_carver: my_sqrt(2) result_carver._results ###Output my_sqrt(x=2) my_sqrt(x=2) = 1.414213562373095 __exit__(self=<__main__.ResultCarver object at 0x7fca8ca525c0>, exc_type=None, exc_value=None, tb=None) ###Markdown Part 2: Access results Give it a method `result()` that returns the value recorded for that particular function name and result:```pythonclass ResultCarver(CallCarver): def result(self, function_name, argument): """Returns the result recorded for function_name(argument"""``` **Solution.** This is mostly done in the code for part 1: ###Code class ResultCarver(ResultCarver): def result(self, function_name, argument): key = simple_call_string(function_name, arguments) return self._results[key] ###Output _____no_output_____ ###Markdown Part 3: Produce assertionsFor the functions called during `webbrowser()` execution, create a set of _assertions_ that check whether the result returned is still the same. Test this for `urllib.parse.urlparse()` and `urllib.parse.urlsplit()`. 
**Solution.** Not too hard now: ###Code with ResultCarver() as webbrowser_result_carver: webbrowser("http://www.example.com") for function_name in ["urllib.parse.urlparse", "urllib.parse.urlsplit"]: for arguments in webbrowser_result_carver.arguments(function_name): try: call = call_string(function_name, arguments) result = webbrowser_result_carver.result(function_name, arguments) print("assert", call, "==", call_value(result)) except Exception: continue ###Output assert urllib.parse.urlparse(url='http://www.example.com', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert 
urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') ###Markdown We can run these assertions: ###Code from urllib.parse import SplitResult, ParseResult, urlparse, urlsplit assert urlparse( url='http://www.example.com', scheme='', allow_fragments=True) == ParseResult( scheme='http', netloc='www.example.com', path='', params='', query='', fragment='') assert urlsplit( url='http://www.example.com', scheme='', allow_fragments=True) == SplitResult( scheme='http', netloc='www.example.com', path='', query='', fragment='') ###Output _____no_output_____ ###Markdown Carving Unit TestsSo far, we have always generated _system input_, i.e. data that the program as a whole obtains via its input channels. If we are interested in testing only a small set of functions, having to go through the system can be very inefficient. This chapter introduces a technique known as _carving_, which, given a system test, automatically extracts a set of _unit tests_ that replicate the calls seen during the unit test. The key idea is to _record_ such calls such that we can _replay_ them later – as a whole or selectively. On top, we also explore how to synthesize API grammars from carved unit tests; this means that we can _synthesize API tests without having to write a grammar at all._ **Prerequisites*** Carving makes use of dynamic traces of function calls and variables, as introduced in the [chapter on configuration fuzzing](ConfigurationFuzzer.ipynb).* Using grammars to test units was introduced in the [chapter on API fuzzing](APIFuzzer.ipynb). ###Code import fuzzingbook_utils import APIFuzzer ###Output _____no_output_____ ###Markdown SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from fuzzingbook.Carver import ```and then make use of the following features.This chapter provides means to _record and replay function calls_ during a system test. Since individual function calls are much faster than a whole system run, such "carving" mechanisms have the potential to run tests much faster. Recording CallsThe `CallCarver` class records all calls occurring while it is active. 
It is used in conjunction with a `with` clause:```python>>> with CallCarver() as carver:>>> y = my_sqrt(2)>>> y = my_sqrt(4)```After execution, `called_functions()` lists the names of functions encountered:```python>>> carver.called_functions()['my_sqrt', '__exit__']```The `arguments()` method lists the arguments recorded for a function. This is a mapping of the function name to a list of lists of arguments; each argument is a pair (parameter name, value).```python>>> carver.arguments('my_sqrt')[[('x', 2)], [('x', 4)]]```Complex arguments are properly serialized, such that they can be easily restored. Synthesizing CallsWhile such recorded arguments already could be turned into arguments and calls, a much nicer alternative is to create a _grammar_ for recorded calls. This allows to synthesize arbitrary _combinations_ of arguments, and also offers a base for further customization of calls.The `CallGrammarMiner` class turns a list of carved executions into a grammar.```python>>> my_sqrt_miner = CallGrammarMiner(carver)>>> my_sqrt_grammar = my_sqrt_miner.mine_call_grammar()>>> my_sqrt_grammar{'': [''], '': [''], '': ['4', '2'], '': ['my_sqrt()']}```This grammar can be used to synthesize calls.```python>>> fuzzer = GrammarCoverageFuzzer(my_sqrt_grammar)>>> fuzzer.fuzz()'my_sqrt(2)'```These calls can be executed in isolation, effectively extracting unit tests from system tests:```python>>> eval(fuzzer.fuzz())2.0``` System Tests vs Unit TestsRemember the URL grammar introduced for [grammar fuzzing](Grammars.ipynb)? With such a grammar, we can happily test a Web browser again and again, checking how it reacts to arbitrary page requests.Let us define a very simple "web browser" that goes and downloads the content given by the URL. ###Code import urllib.parse def webbrowser(url): """Download the http/https resource given by the URL""" import requests # Only import if needed r = requests.get(url) return r.text ###Output _____no_output_____ ###Markdown Let us apply this on [fuzzingbook.org](https://www.fuzzingbook.org/) and measure the time, using the [Timer class](Timer.ipynb): ###Code from Timer import Timer with Timer() as webbrowser_timer: fuzzingbook_contents = webbrowser( "http://www.fuzzingbook.org/html/Fuzzer.html") print("Downloaded %d bytes in %.2f seconds" % (len(fuzzingbook_contents), webbrowser_timer.elapsed_time())) fuzzingbook_contents[:100] ###Output _____no_output_____ ###Markdown A full webbrowser, of course, would also render the HTML content. We can achieve this using these commands (but we don't, as we do not want to replicate the entire Web page here):```pythonfrom IPython.display import HTML, displayHTML(fuzzingbook_contents)``` Having to start a whole browser (or having it render a Web page) again and again means lots of overhead, though – in particular if we want to test only a subset of its functionality. In particular, after a change in the code, we would prefer to test only the subset of functions that is affected by the change, rather than running the well-tested functions again and again. Let us assume we change the function that takes care of parsing the given URL and decomposing it into the individual elements – the scheme ("http"), the network location (`"www.fuzzingbook.com"`), or the path (`"/html/Fuzzer.html"`). 
This function is named `urlparse()`: ###Code from urllib.parse import urlparse urlparse('https://www.fuzzingbook.com/html/Carver.html') ###Output _____no_output_____ ###Markdown You see how the individual elements of the URL – the _scheme_ (`"http"`), the _network location_ (`"www.fuzzingbook.com"`), or the path (`"//html/Carver.html"`) are all properly identified. Other elements (like `params`, `query`, or `fragment`) are empty, because they were not part of our input. The interesting thing is that executing only `urlparse()` is orders of magnitude faster than running all of `webbrowser()`. Let us measure the factor: ###Code runs = 1000 with Timer() as urlparse_timer: for i in range(runs): urlparse('https://www.fuzzingbook.com/html/Carver.html') avg_urlparse_time = urlparse_timer.elapsed_time() / 1000 avg_urlparse_time ###Output _____no_output_____ ###Markdown Compare this to the time required by the webbrowser ###Code webbrowser_timer.elapsed_time() ###Output _____no_output_____ ###Markdown The difference in time is huge: ###Code webbrowser_timer.elapsed_time() / avg_urlparse_time ###Output _____no_output_____ ###Markdown Hence, in the time it takes to run `webbrowser()` once, we can have _tens of thousands_ of executions of `urlparse()` – and this does not even take into account the time it takes the browser to render the downloaded HTML, to run the included scripts, and whatever else happens when a Web page is loaded. Hence, strategies that allow us to test at the _unit_ level are very promising as they can save lots of overhead. Carving Unit TestsTesting methods and functions at the unit level requires a very good understanding of the individual units to be tested as well as their interplay with other units. Setting up an appropriate infrastructure and writing unit tests by hand thus is demanding, yet rewarding. There is, however, an interesting alternative to writing unit tests by hand. The technique of _carving_ automatically _converts system tests into unit tests_ by means of recording and replaying function calls:1. During a system test (given or generated), we _record_ all calls into a function, including all arguments and other variables the function reads.2. From these, we synthesize a self-contained _unit test_ that reconstructs the function call with all arguments.3. This unit test can be executed (replayed) at any time with high efficiency.In the remainder of this chapter, let us explore these steps. Recording CallsOur first challenge is to record function calls together with their arguments. (In the interest of simplicity, we restrict ourself to arguments, ignoring any global variables or other non-arguments that are read by the function.) To record calls and arguments, we use the mechanism [we introduced for coverage](Coverage.ipynb): By setting up a tracer function, we track all calls into individual functions, also saving their arguments. 
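Before introducing the full `Carver` class below, here is a minimal sketch of this tracing mechanism in isolation (the names `show_call` and `demo` are illustrative only): a trace function registered via `sys.settrace()` receives a `"call"` event for every function invocation, and the function arguments are available in `frame.f_locals`.

```python
import sys

def show_call(frame, event, arg):
    # A "call" event fires whenever a new function frame is entered
    if event == "call":
        code = frame.f_code
        # The parameters are the first co_argcount local variables
        args = {var: frame.f_locals[var]
                for var in code.co_varnames[:code.co_argcount]}
        print("call:", code.co_name, args)
    return None  # returning None: do not trace events within the call

def demo(x, y=2):
    return x * y

sys.settrace(show_call)   # install the trace function
demo(3)                   # prints: call: demo {'x': 3, 'y': 2}
sys.settrace(None)        # restore normal execution
```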
Just like `Coverage` objects, we want to use `Carver` objects to be able to be used in conjunction with the `with` statement, such that we can trace a particular code block:```pythonwith Carver() as carver: function_to_be_traced()c = carver.calls()```The initial definition supports this construct: \todo{Get tracker from [dynamic invariants](DynamicInvariants.ipynb)} ###Code import sys class Carver(object): def __init__(self, log=False): self._log = log self.reset() def reset(self): self._calls = {} # Start of `with` block def __enter__(self): self.original_trace_function = sys.gettrace() sys.settrace(self.traceit) return self # End of `with` block def __exit__(self, exc_type, exc_value, tb): sys.settrace(self.original_trace_function) ###Output _____no_output_____ ###Markdown The actual work takes place in the `traceit()` method, which records all calls in the `_calls` attribute. First, we define two helper functions: ###Code import inspect def get_qualified_name(code): """Return the fully qualified name of the current function""" name = code.co_name module = inspect.getmodule(code) if module is not None: name = module.__name__ + "." + name return name def get_arguments(frame): """Return call arguments in the given frame""" # When called, all arguments are local variables arguments = [(var, frame.f_locals[var]) for var in frame.f_locals] arguments.reverse() # Want same order as call return arguments class CallCarver(Carver): def add_call(self, function_name, arguments): """Add given call to list of calls""" if function_name not in self._calls: self._calls[function_name] = [] self._calls[function_name].append(arguments) # Tracking function: Record all calls and all args def traceit(self, frame, event, arg): if event != "call": return None code = frame.f_code function_name = code.co_name qualified_name = get_qualified_name(code) arguments = get_arguments(frame) self.add_call(function_name, arguments) if qualified_name != function_name: self.add_call(qualified_name, arguments) if self._log: print(simple_call_string(function_name, arguments)) return None ###Output _____no_output_____ ###Markdown Finally, we need some convenience functions to access the calls: ###Code class CallCarver(CallCarver): def calls(self): """Return a dictionary of all calls traced.""" return self._calls def arguments(self, function_name): """Return a list of all arguments of the given function as (VAR, VALUE) pairs. Raises an exception if the function was not traced.""" return self._calls[function_name] def called_functions(self, qualified=False): """Return all functions called.""" if qualified: return [function_name for function_name in self._calls.keys() if function_name.find('.') >= 0] else: return [function_name for function_name in self._calls.keys() if function_name.find('.') < 0] ###Output _____no_output_____ ###Markdown Recording my_sqrt() Let's try out our new `Carver` class – first on a very simple function: ###Code from Intro_Testing import my_sqrt with CallCarver() as sqrt_carver: my_sqrt(2) my_sqrt(4) ###Output _____no_output_____ ###Markdown We can retrieve all calls seen... ###Code sqrt_carver.calls() sqrt_carver.called_functions() ###Output _____no_output_____ ###Markdown ... as well as the arguments of a particular function: ###Code sqrt_carver.arguments("my_sqrt") ###Output _____no_output_____ ###Markdown We define a convenience function for nicer printing of these lists: ###Code def simple_call_string(function_name, argument_list): """Return function_name(arg[0], arg[1], ...) 
as a string""" return function_name + "(" + \ ", ".join([var + "=" + repr(value) for (var, value) in argument_list]) + ")" for function_name in sqrt_carver.called_functions(): for argument_list in sqrt_carver.arguments(function_name): print(simple_call_string(function_name, argument_list)) ###Output my_sqrt(x=2) my_sqrt(x=4) __exit__(self=<__main__.CallCarver object at 0x7f84a1c77588>, exc_type=None, exc_value=None, tb=None) ###Markdown This is a syntax we can directly use to invoke `my_sqrt()` again: ###Code eval("my_sqrt(x=2)") ###Output _____no_output_____ ###Markdown Carving urlparse() What happens if we apply this to `webbrowser()`? ###Code with CallCarver() as webbrowser_carver: webbrowser("http://www.example.com") ###Output _____no_output_____ ###Markdown We see that retrieving a URL from the Web requires quite some functionality: ###Code function_list = webbrowser_carver.called_functions(qualified=True) len(function_list) print(function_list[:50]) ###Output ['requests.api.get', 'requests.api.request', 'requests.sessions.__init__', 'requests.utils.default_headers', 'requests.utils.default_user_agent', 'requests.structures.__init__', 'collections.abc.update', 'abc.__instancecheck__', '_weakrefset.__contains__', 'requests.structures.__setitem__', 'requests.hooks.default_hooks', 'requests.hooks.<dictcomp>', 'requests.cookies.cookiejar_from_dict', 'http.cookiejar.__init__', 'threading.RLock', 'http.cookiejar.__iter__', 'requests.cookies.<listcomp>', 'http.cookiejar.deepvalues', 'http.cookiejar.vals_sorted_by_key', 'requests.adapters.__init__', 'urllib3.util.retry.__init__', 'requests.adapters.init_poolmanager', 'urllib3.poolmanager.__init__', 'urllib3.request.__init__', 'urllib3._collections.__init__', 'requests.sessions.mount', 'requests.sessions.<listcomp>', 'requests.sessions.__enter__', 'requests.sessions.request', 'requests.models.__init__', 'requests.sessions.prepare_request', 'requests.cookies.merge_cookies', 'requests.cookies.update', 'requests.utils.get_netrc_auth', 'posixpath.expanduser', 'posixpath._get_sep', 'collections.abc.__contains__', 'os.__getitem__', 'os.encode', 'os.decode', 'genericpath.exists', 'urllib.parse.urlparse', 'urllib.parse._coerce_args', 'urllib.parse.urlsplit', 'urllib.parse._splitnetloc', 'urllib.parse._noop', 'netrc.__init__', '_bootlocale.getpreferredencoding', 'codecs.__init__', 'netrc._parse'] ###Markdown Among several other functions, we also have a call to `urlparse()`: ###Code urlparse_argument_list = webbrowser_carver.arguments("urllib.parse.urlparse") urlparse_argument_list ###Output _____no_output_____ ###Markdown Again, we can convert this into a well-formatted call: ###Code urlparse_call = simple_call_string("urlparse", urlparse_argument_list[0]) urlparse_call ###Output _____no_output_____ ###Markdown Again, we can re-execute this call: ###Code eval(urlparse_call) ###Output _____no_output_____ ###Markdown We now have successfully carved the call to `urlparse()` out of the `webbrowser()` execution. Replaying Calls Replaying calls in their entirety and in all generality is tricky, as there are several challenges to be addressed. These include:1. We need to be able to _access_ individual functions. If we access a function by name, the name must be in scope. If the name is not visible (for instance, because it is a name internal to the module), we must make it visible.2. Any _resources_ accessed outside of arguments must be recorded and reconstructed for replay as well. 
This can be difficult if variables refer to external resources such as files or network resources.3. _Complex objects_ must be reconstructed as well.These constraints make carving hard or even impossible if the function to be tested interacts heavily with its environment. To illustrate these issues, consider the `email.parser.parse()` method that is invoked in `webbrowser()`: ###Code email_parse_argument_list = webbrowser_carver.arguments("email.parser.parse") ###Output _____no_output_____ ###Markdown Calls to this method look like this: ###Code email_parse_call = simple_call_string( "email.parser.parse", email_parse_argument_list[0]) email_parse_call ###Output _____no_output_____ ###Markdown We see that `email.parser.parse()` is part of a `email.parser.Parser` object and it gets a `StringIO` object. Both are non-primitive values. How could we possibly reconstruct them? Serializing ObjectsThe answer to the problem of complex objects lies in creating a _persistent_ representation that can be _reconstructed_ at later points in time. This process is known as _serialization_; in Python, it is also known as _pickling_. The `pickle` module provides means to create a serialized representation of an object. Let us apply this on the `email.parser.Parser` object we just found: ###Code import pickle parser_object = email_parse_argument_list[0][0][1] parser_object pickled = pickle.dumps(parser_object) pickled ###Output _____no_output_____ ###Markdown From this string representing the serialized `email.parser.Parser` object, we can recreate the Parser object at any time: ###Code unpickled_parser_object = pickle.loads(pickled) unpickled_parser_object ###Output _____no_output_____ ###Markdown The serialization mechanism allows us to produce a representation for all objects passed as parameters (assuming they can be pickled, that is). We can now extend the `simple_call_string()` function such that it automatically pickles objects. Additionally, we set it up such that if the first parameter is named `self` (i.e., it is a class method), we make it a method of the `self` object. ###Code def call_value(value): value_as_string = repr(value) if value_as_string.find('<') >= 0: # Complex object value_as_string = "pickle.loads(" + repr(pickle.dumps(value)) + ")" return value_as_string def call_string(function_name, argument_list): """Return function_name(arg[0], arg[1], ...) as a string, pickling complex objects""" if len(argument_list) > 0: (first_var, first_value) = argument_list[0] if first_var == "self": # Make this a method call method_name = function_name.split(".")[-1] function_name = call_value(first_value) + "." 
+ method_name argument_list = argument_list[1:] return function_name + "(" + \ ", ".join([var + "=" + call_value(value) for (var, value) in argument_list]) + ")" ###Output _____no_output_____ ###Markdown Let us apply the extended `call_string()` method to create a call for `email.parser.parse()`, including pickled objects: ###Code call = call_string("email.parser.parse", email_parse_argument_list[0]) print(call) ###Output pickle.loads(b'\x80\x03cemail.parser\nParser\nq\x00)\x81q\x01}q\x02(X\x06\x00\x00\x00_classq\x03chttp.client\nHTTPMessage\nq\x04X\x06\x00\x00\x00policyq\x05cemail._policybase\nCompat32\nq\x06)\x81q\x07ub.').parse(fp=pickle.loads(b'\x80\x03c_io\nStringIO\nq\x00)\x81q\x01(Xe\x01\x00\x00Content-Encoding: gzip\r\nAccept-Ranges: bytes\r\nAge: 495006\r\nCache-Control: max-age=604800\r\nContent-Type: text/html; charset=UTF-8\r\nDate: Wed, 24 Jun 2020 16:17:02 GMT\r\nEtag: "3147526947"\r\nExpires: Wed, 01 Jul 2020 16:17:02 GMT\r\nLast-Modified: Thu, 17 Oct 2019 07:18:26 GMT\r\nServer: ECS (dcb/7F3B)\r\nVary: Accept-Encoding\r\nX-Cache: HIT\r\nContent-Length: 648\r\n\r\nq\x02X\x01\x00\x00\x00\nq\x03Me\x01Ntq\x04b.'), headersonly=False) ###Markdown With this call involvimng the pickled object, we can now re-run the original call and obtain a valid result: ###Code eval(call) ###Output _____no_output_____ ###Markdown All CallsSo far, we have seen only one call of `webbrowser()`. How many of the calls within `webbrowser()` can we actually carve and replay? Let us try this out and compute the numbers. ###Code import traceback import enum import socket all_functions = set(webbrowser_carver.called_functions(qualified=True)) call_success = set() run_success = set() exceptions_seen = set() for function_name in webbrowser_carver.called_functions(qualified=True): for argument_list in webbrowser_carver.arguments(function_name): try: call = call_string(function_name, argument_list) call_success.add(function_name) result = eval(call) run_success.add(function_name) except Exception as exc: exceptions_seen.add(repr(exc)) # print("->", call, file=sys.stderr) # traceback.print_exc() # print("", file=sys.stderr) continue print("%d/%d calls (%.2f%%) successfully created and %d/%d calls (%.2f%%) successfully ran" % ( len(call_success), len(all_functions), len( call_success) * 100 / len(all_functions), len(run_success), len(all_functions), len(run_success) * 100 / len(all_functions))) ###Output 241/304 calls (79.28%) successfully created and 99/304 calls (32.57%) successfully ran ###Markdown About half of the calls succeed. 
Let us take a look into some of the error messages we get: ###Code for i in range(10): print(list(exceptions_seen)[i]) ###Output PicklingError("Can't pickle <class 'method_descriptor'>: attribute lookup method_descriptor on builtins failed",) ClosedPoolError("HTTPConnectionPool(host='www.example.com', port=80): Pool is closed.",) NameError("name 'http' is not defined",) TimeoutStateError('Timeout timer has already been started.',) TypeError('__getitem__() takes no keyword arguments',) NameError("name 'Retry' is not defined",) NameError("name 'threading' is not defined",) NameError("name '_bootlocale' is not defined",) TypeError('Cannot serialize socket object',) NameError("name 'os' is not defined",) ###Markdown We see that:* **A large majority of calls could be converted into call strings.** If this is not the case, this is mostly due to having unserialized objects being passed.* **About half of the calls could be executed.** The error messages for the failing runs are varied; the most frequent being that some internal name is invoked that is not in scope. Our carving mechanism should be taken with a grain of salt: We still do not cover the situation where external variables and values (such as global variables) are being accessed, and the serialization mechanism cannot recreate external resources. Still, if the function of interest falls among those that _can_ be carved and replayed, we can very effectively re-run its calls with their original arguments. Mining API Grammars from Carved CallsSo far, we have used carved calls to replay exactly the same invocations as originally encountered. However, we can also _mutate_ carved calls to effectively fuzz APIs with previously recorded arguments.The general idea is as follows:1. First, we record all calls of a specific function from a given execution of the program.2. Second, we create a grammar that incorporates all these calls, with separate rules for each argument and alternatives for each value found; this allows us to produce calls that arbitrarily _recombine_ these arguments.Let us explore these steps in the following sections. From Calls to GrammarsLet us start with an example. The `power(x, y)` function returns $x^y$; it is but a wrapper around the equivalent `math.pow()` function. (Since `power()` is defined in Python, we can trace it – in contrast to `math.pow()`, which is implemented in C.) 
###Code import math def power(x, y): return math.pow(x, y) ###Output _____no_output_____ ###Markdown Let us invoke `power()` while recording its arguments: ###Code with CallCarver() as power_carver: z = power(1, 2) z = power(3, 4) power_carver.arguments("power") ###Output _____no_output_____ ###Markdown From this list of recorded arguments, we could now create a grammar for the `power()` call, with `x` and `y` expanding into the values seen: ###Code from Grammars import START_SYMBOL, is_valid_grammar, new_symbol, extend_grammar POWER_GRAMMAR = { "<start>": ["power(<x>, <y>)"], "<x>": ["1", "3"], "<y>": ["2", "4"] } assert is_valid_grammar(POWER_GRAMMAR) ###Output _____no_output_____ ###Markdown When fuzzing with this grammar, we then get arbitrary combinations of `x` and `y`; aiming for coverage will ensure that all values are actually tested at least once: ###Code from GrammarCoverageFuzzer import GrammarCoverageFuzzer power_fuzzer = GrammarCoverageFuzzer(POWER_GRAMMAR) [power_fuzzer.fuzz() for i in range(5)] ###Output _____no_output_____ ###Markdown What we need is a method to automatically convert the arguments as seen in `power_carver` to the grammar as seen in `POWER_GRAMMAR`. This is what we define in the next section. A Grammar Miner for CallsWe introduce a class `CallGrammarMiner`, which, given a `Carver`, automatically produces a grammar from the calls seen. To initialize, we pass the carver object: ###Code class CallGrammarMiner(object): def __init__(self, carver, log=False): self.carver = carver self.log = log ###Output _____no_output_____ ###Markdown Initial GrammarThe initial grammar produces a single call. The possible `` expansions are to be constructed later: ###Code import copy class CallGrammarMiner(CallGrammarMiner): CALL_SYMBOL = "<call>" def initial_grammar(self): return extend_grammar( {START_SYMBOL: [self.CALL_SYMBOL], self.CALL_SYMBOL: [] }) m = CallGrammarMiner(power_carver) initial_grammar = m.initial_grammar() initial_grammar ###Output _____no_output_____ ###Markdown A Grammar from ArgumentsLet us start by creating a grammar from a list of arguments. The method `mine_arguments_grammar()` creates a grammar for the arguments seen during carving, such as these: ###Code arguments = power_carver.arguments("power") arguments ###Output _____no_output_____ ###Markdown The `mine_arguments_grammar()` method iterates through the variables seen and creates a mapping `variables` of variable names to a set of values seen (as strings, going through `call_value()`). In a second step, it then creates a grammar with a rule for each variable name, expanding into the values seen. 
###Code class CallGrammarMiner(CallGrammarMiner): def var_symbol(self, function_name, var, grammar): return new_symbol(grammar, "<" + function_name + "-" + var + ">") def mine_arguments_grammar(self, function_name, arguments, grammar): var_grammar = {} variables = {} for argument_list in arguments: for (var, value) in argument_list: value_string = call_value(value) if self.log: print(var, "=", value_string) if value_string.find("<") >= 0: var_grammar["<langle>"] = ["<"] value_string = value_string.replace("<", "<langle>") if var not in variables: variables[var] = set() variables[var].add(value_string) var_symbols = [] for var in variables: var_symbol = self.var_symbol(function_name, var, grammar) var_symbols.append(var_symbol) var_grammar[var_symbol] = list(variables[var]) return var_grammar, var_symbols m = CallGrammarMiner(power_carver) var_grammar, var_symbols = m.mine_arguments_grammar( "power", arguments, initial_grammar) var_grammar ###Output _____no_output_____ ###Markdown The additional return value `var_symbols` is a list of argument symbols in the call: ###Code var_symbols ###Output _____no_output_____ ###Markdown A Grammar from CallsTo get the grammar for a single function (`mine_function_grammar()`), we add a call to the function: ###Code class CallGrammarMiner(CallGrammarMiner): def function_symbol(self, function_name, grammar): return new_symbol(grammar, "<" + function_name + ">") def mine_function_grammar(self, function_name, grammar): arguments = self.carver.arguments(function_name) if self.log: print(function_name, arguments) var_grammar, var_symbols = self.mine_arguments_grammar( function_name, arguments, grammar) function_grammar = var_grammar function_symbol = self.function_symbol(function_name, grammar) if len(var_symbols) > 0 and var_symbols[0].find("-self") >= 0: # Method call function_grammar[function_symbol] = [ var_symbols[0] + "." + function_name + "(" + ", ".join(var_symbols[1:]) + ")"] else: function_grammar[function_symbol] = [ function_name + "(" + ", ".join(var_symbols) + ")"] if self.log: print(function_symbol, "::=", function_grammar[function_symbol]) return function_grammar, function_symbol m = CallGrammarMiner(power_carver) function_grammar, function_symbol = m.mine_function_grammar( "power", initial_grammar) function_grammar ###Output _____no_output_____ ###Markdown The additionally returned `function_symbol` holds the name of the function call just added: ###Code function_symbol ###Output _____no_output_____ ###Markdown A Grammar from all CallsLet us now repeat the above for all function calls seen during carving. 
To this end, we simply iterate over all function calls seen: ###Code power_carver.called_functions() class CallGrammarMiner(CallGrammarMiner): def mine_call_grammar(self, function_list=None, qualified=False): grammar = self.initial_grammar() fn_list = function_list if function_list is None: fn_list = self.carver.called_functions(qualified=qualified) for function_name in fn_list: if function_list is None and (function_name.startswith("_") or function_name.startswith("<")): continue # Internal function # Ignore errors with mined functions try: function_grammar, function_symbol = self.mine_function_grammar( function_name, grammar) except: if function_list is not None: raise if function_symbol not in grammar[self.CALL_SYMBOL]: grammar[self.CALL_SYMBOL].append(function_symbol) grammar.update(function_grammar) assert is_valid_grammar(grammar) return grammar ###Output _____no_output_____ ###Markdown The method `mine_call_grammar()` is the one that clients can and should use – first for mining... ###Code m = CallGrammarMiner(power_carver) power_grammar = m.mine_call_grammar() power_grammar ###Output _____no_output_____ ###Markdown ...and then for fuzzing: ###Code power_fuzzer = GrammarCoverageFuzzer(power_grammar) [power_fuzzer.fuzz() for i in range(5)] ###Output _____no_output_____ ###Markdown With this, we have successfully extracted a grammar from a recorded execution; in contrast to "simple" carving, our grammar allows us to _recombine_ arguments and thus to fuzz at the API level. Fuzzing Web FunctionsLet us now apply our grammar miner on a larger API – the `urlparse()` function we already encountered during carving. ###Code with CallCarver() as webbrowser_carver: webbrowser("https://www.fuzzingbook.org") webbrowser("http://www.example.com") ###Output _____no_output_____ ###Markdown We can mine a grammar from the calls encountered: ###Code m = CallGrammarMiner(webbrowser_carver) webbrowser_grammar = m.mine_call_grammar() ###Output _____no_output_____ ###Markdown This is a rather large grammar: ###Code call_list = webbrowser_grammar['<call>'] len(call_list) print(call_list[:20]) ###Output ['<webbrowser>', '<default_headers>', '<default_user_agent>', '<update>', '<default_hooks>', '<cookiejar_from_dict>', '<RLock>', '<deepvalues>', '<vals_sorted_by_key>', '<init_poolmanager>', '<mount>', '<prepare_request>', '<merge_cookies>', '<get_netrc_auth>', '<expanduser>', '<encode>', '<decode>', '<exists>', '<urlparse>', '<urlsplit>'] ###Markdown Here's the rule for the `urlsplit()` function: ###Code webbrowser_grammar["<urlsplit>"] ###Output _____no_output_____ ###Markdown Here are the arguments. Note that although we only passed `http://www.fuzzingbook.org` as a parameter, we also see the `https:` variant. That is because opening the `http:` URL automatically redirects to the `https:` URL, which is then also processed by `urlsplit()`. ###Code webbrowser_grammar["<urlsplit-url>"] ###Output _____no_output_____ ###Markdown There also is some variation in the `scheme` argument: ###Code webbrowser_grammar["<urlsplit-scheme>"] ###Output _____no_output_____ ###Markdown If we now apply a fuzzer on these rules, we systematically cover all variations of arguments seen, including, of course, combinations not seen during carving. Again, we are fuzzing at the API level here. 
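One way to quantify this recombination is to count how many distinct `urlsplit()` invocations the mined grammar can produce – the product of the number of alternatives of each argument symbol. A rough sketch, assuming `webbrowser_grammar` as mined above (the symbol names follow the `<function-argument>` convention of `CallGrammarMiner`; the guard covers symbols that may not have been mined):

```python
# Sketch: upper bound on distinct urlsplit() calls derivable from the grammar
n_combinations = 1
for symbol in ["<urlsplit-url>", "<urlsplit-scheme>", "<urlsplit-allow_fragments>"]:
    if symbol in webbrowser_grammar:          # guard: symbol may be absent
        n_combinations *= len(webbrowser_grammar[symbol])
print("Up to", n_combinations, "distinct urlsplit() calls")
```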
###Code urlsplit_fuzzer = GrammarCoverageFuzzer( webbrowser_grammar, start_symbol="<urlsplit>") for i in range(5): print(urlsplit_fuzzer.fuzz()) ###Output urlsplit('https://www.fuzzingbook.org', '', True) urlsplit('https://www.fuzzingbook.org/', '', True) urlsplit('http://www.example.com/', '', True) urlsplit('http://www.example.com', '', True) urlsplit('http://www.example.com/', '', True) ###Markdown Just as seen with carving, running tests at the API level is orders of magnitude faster than executing system tests. Hence, this calls for means to fuzz at the method level: ###Code from urllib.parse import urlsplit from Timer import Timer with Timer() as urlsplit_timer: urlsplit('http://www.fuzzingbook.org/', 'http', True) urlsplit_timer.elapsed_time() with Timer() as webbrowser_timer: webbrowser("http://www.fuzzingbook.org") webbrowser_timer.elapsed_time() webbrowser_timer.elapsed_time() / urlsplit_timer.elapsed_time() ###Output _____no_output_____ ###Markdown But then again, the caveats encountered during carving apply, notably the requirement to recreate the original function environment. If we also alter or recombine arguments, we get the additional risk of _violating an implicit precondition_ – that is, invoking a function with arguments the function was never designed for. Such _false alarms_, resulting from incorrect invocations rather than incorrect implementations, must then be identified (typically manually) and wed out (for instance, by altering or constraining the grammar). The huge speed gains at the API level, however, may well justify this additional investment. SynopsisThis chapter provides means to _record and replay function calls_ during a system test. Since individual function calls are much faster than a whole system run, such "carving" mechanisms have the potential to run tests much faster. Recording CallsThe `CallCarver` class records all calls occurring while it is active. It is used in conjunction with a `with` clause: ###Code with CallCarver() as carver: y = my_sqrt(2) y = my_sqrt(4) ###Output _____no_output_____ ###Markdown After execution, `called_functions()` lists the names of functions encountered: ###Code carver.called_functions() ###Output _____no_output_____ ###Markdown The `arguments()` method lists the arguments recorded for a function. This is a mapping of the function name to a list of lists of arguments; each argument is a pair (parameter name, value). ###Code carver.arguments('my_sqrt') ###Output _____no_output_____ ###Markdown Complex arguments are properly serialized, such that they can be easily restored. Synthesizing CallsWhile such recorded arguments already could be turned into arguments and calls, a much nicer alternative is to create a _grammar_ for recorded calls. This allows to synthesize arbitrary _combinations_ of arguments, and also offers a base for further customization of calls. The `CallGrammarMiner` class turns a list of carved executions into a grammar. ###Code my_sqrt_miner = CallGrammarMiner(carver) my_sqrt_grammar = my_sqrt_miner.mine_call_grammar() my_sqrt_grammar ###Output _____no_output_____ ###Markdown This grammar can be used to synthesize calls. 
###Code fuzzer = GrammarCoverageFuzzer(my_sqrt_grammar) fuzzer.fuzz() ###Output _____no_output_____ ###Markdown These calls can be executed in isolation, effectively extracting unit tests from system tests: ###Code eval(fuzzer.fuzz()) ###Output _____no_output_____ ###Markdown Lessons Learned* _Carving_ allows for effective replay of function calls recorded during a system test.* A function call can be _orders of magnitude faster_ than a system invocation.* _Serialization_ allows to create persistent representations of complex objects.* Functions that heavily interact with their environment and/or access external resources are difficult to carve.* From carved calls, one can produce API grammars that arbitrarily combine carved arguments. Next StepsIn the next chapter, we will discuss [how to reduce failure-inducing inputs](Reducer.ipynb). BackgroundCarving was invented by Elbaum et al. \cite{Elbaum2006} and originally implemented for Java. In this chapter, we follow several of their design choices (including recording and serializing method arguments only).The combination of carving and fuzzing at the API level is described in \cite{Kampmann2018}. Exercises Exercise 1: Carving for Regression TestingSo far, during carving, we only have looked into reproducing _calls_, but not into actually checking the _results_ of these calls. This is important for _regression testing_ – i.e. checking whether a change to code does not impede existing functionality. We can build this by recording not only _calls_, but also _return values_ – and then later compare whether the same calls result in the same values. This may not work on all occasions; values that depend on time, randomness, or other external factors may be different. Still, for functionality that abstracts from these details, checking that nothing has changed is an important part of testing. Our aim is to design a class `ResultCarver` that extends `CallCarver` by recording both calls and return values.In a first step, create a `traceit()` method that also tracks return values by extending the `traceit()` method. The `traceit()` event type is `"return"` and the `arg` parameter is the returned value. Here is a prototype that only prints out the returned values: ###Code class ResultCarver(CallCarver): def traceit(self, frame, event, arg): if event == "return": if self._log: print("Result:", arg) super().traceit(frame, event, arg) # Need to return traceit function such that it is invoked for return # events return self.traceit with ResultCarver(log=True) as result_carver: my_sqrt(2) ###Output my_sqrt(x=2) Result: 1.414213562373095 __exit__(self=<__main__.ResultCarver object at 0x7f84a29be160>, exc_type=None, exc_value=None, tb=None) ###Markdown Part 1: Store function resultsExtend the above code such that results are _stored_ in a way that associates them with the currently returning function (or method). To this end, you need to keep track of the _current stack of called functions_. 
**Solution.** Here's a solution, building on the above: ###Code class ResultCarver(CallCarver): def reset(self): super().reset() self._call_stack = [] self._results = {} def add_result(self, function_name, arguments, result): key = simple_call_string(function_name, arguments) self._results[key] = result def traceit(self, frame, event, arg): if event == "call": code = frame.f_code function_name = code.co_name qualified_name = get_qualified_name(code) self._call_stack.append( (function_name, qualified_name, get_arguments(frame))) if event == "return": result = arg (function_name, qualified_name, arguments) = self._call_stack.pop() self.add_result(function_name, arguments, result) if function_name != qualified_name: self.add_result(qualified_name, arguments, result) if self._log: print( simple_call_string( function_name, arguments), "=", result) # Keep on processing current calls super().traceit(frame, event, arg) # Need to return traceit function such that it is invoked for return # events return self.traceit with ResultCarver(log=True) as result_carver: my_sqrt(2) result_carver._results ###Output my_sqrt(x=2) my_sqrt(x=2) = 1.414213562373095 __exit__(self=<__main__.ResultCarver object at 0x7f84a29bed68>, exc_type=None, exc_value=None, tb=None) ###Markdown Part 2: Access results Give it a method `result()` that returns the value recorded for that particular function name and result:```pythonclass ResultCarver(CallCarver): def result(self, function_name, argument): """Returns the result recorded for function_name(argument"""``` **Solution.** This is mostly done in the code for part 1: ###Code class ResultCarver(ResultCarver): def result(self, function_name, argument): key = simple_call_string(function_name, arguments) return self._results[key] ###Output _____no_output_____ ###Markdown Part 3: Produce assertionsFor the functions called during `webbrowser()` execution, create a set of _assertions_ that check whether the result returned is still the same. Test this for `urllib.parse.urlparse()` and `urllib.parse.urlsplit()`. 
**Solution.** Not too hard now: ###Code with ResultCarver() as webbrowser_result_carver: webbrowser("http://www.example.com") for function_name in ["urllib.parse.urlparse", "urllib.parse.urlsplit"]: for arguments in webbrowser_result_carver.arguments(function_name): try: call = call_string(function_name, arguments) result = webbrowser_result_carver.result(function_name, arguments) print("assert", call, "==", call_value(result)) except Exception: continue ###Output assert urllib.parse.urlparse(url='http://www.example.com', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlparse(url='http://www.example.com/', scheme='', allow_fragments=True) == ParseResult(scheme='http', netloc='www.example.com', path='/', params='', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert 
urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') assert urllib.parse.urlsplit(url='http://www.example.com/', scheme='', allow_fragments=True) == SplitResult(scheme='http', netloc='www.example.com', path='/', query='', fragment='') ###Markdown We can run these assertions: ###Code from urllib.parse import SplitResult, ParseResult, urlparse, urlsplit assert urlparse( url='http://www.example.com', scheme='', allow_fragments=True) == ParseResult( scheme='http', netloc='www.example.com', path='', params='', query='', fragment='') assert urlsplit( url='http://www.example.com', scheme='', allow_fragments=True) == SplitResult( scheme='http', netloc='www.example.com', path='', query='', fragment='') ###Output _____no_output_____
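Such carved assertions can also be collected into a regular test module, so that they run as part of a regression suite. A sketch with hypothetical test functions, merely restating the carved results shown above in pytest style:

```python
# Sketch: carved results wrapped as regression tests (hypothetical module)
from urllib.parse import urlparse, urlsplit, ParseResult, SplitResult

def test_urlparse_carved():
    assert urlparse(url='http://www.example.com', scheme='',
                    allow_fragments=True) == ParseResult(
        scheme='http', netloc='www.example.com', path='',
        params='', query='', fragment='')

def test_urlsplit_carved():
    assert urlsplit(url='http://www.example.com', scheme='',
                    allow_fragments=True) == SplitResult(
        scheme='http', netloc='www.example.com', path='',
        query='', fragment='')
```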
Exploratory Analysis Problem 1.ipynb
###Markdown Project Website: https://mohamedirfansh.github.io/Airbnb-Data-Science-Project/ Problem Definition: What are the factors and features of a listing that make an AirBnb listing more expensive? Practical MotivationAirBnb has provided many travellers with a great, easy and convenient place to stay during their travels. Similarly, it has also given many hosts a way to earn rental revenue. However, with so many listings available with varying prices, how can an aspiring host know what type of property to invest in if his main aim is to list it on AirBnb and earn rental revenue? Additionally, if a traveller wants to find the cheapest listing available but with certain features he prefers, like 'free parking', how does he know what aspects to look into to find a suitable listing? There are many factors which influence the price of a listing, which is why we aim to find the most important factors that affect the price and, more importantly, the features that are common among the most expensive listings. This will allow an aspiring AirBnb host to ensure that his listing is equipped with those important features such that he will be able to charge a higher price without losing customers. Moreover, a traveller will also know the factors to look into to get the lowest price possible while having certain features he prefers. **The above problem definition can be broken down into 3 sub-problems, each one targeting a different aspect of the dataset**. By analysing the dataset with respect to each sub-problem, we can gain useful statistical insights into the data. Afterwards, machine learning techniques and algorithmic optimisation will be performed on the dataset to determine the most important variables that influence the price of an AirBnb listing. Sub-Problem 1: What are the features/facilities/amenities of a property that affect its price? ###Code # Importing required libraries import pandas as pd import numpy as np import seaborn as sb import matplotlib.pyplot as plt sb.set() from collections import Counter # Importing the listing dataset listingsDF = pd.read_csv('datasets/listings.csv') listingsDF.head() print("Data type : ", type(listingsDF)) print("Data dims : ", listingsDF.shape) ###Output Data type :  <class 'pandas.core.frame.DataFrame'> Data dims :  (3818, 92) ###Markdown Data Cleaning ###Code # After viewing the multiple columns in the listings.csv from the data_description.txt, # the following variables were picked for further analysis and dropped variables like date_scraped etc.
listingDF = listingsDF[['id','name','summary','longitude','latitude','space','description','instant_bookable','neighborhood_overview','neighbourhood_cleansed','host_id','host_name','host_since', 'host_response_time','street', 'zipcode','review_scores_rating','property_type','room_type','accommodates','bathrooms','bedrooms','beds','reviews_per_month','amenities','cancellation_policy','number_of_reviews','price']] listingDF.head() # Replace NaN values with 0 listingDF.fillna(0, inplace=True) # Extract prices from listingDF into priceDF priceDF = listingDF['price'] # Create an empty prices list prices=[] # Convert prices from listingDF into float values and append it in prices list for p in priceDF: p = float(p[1:].replace(',','')) prices.append(p) # Replace the price column in the original listingDF with the new prices listingDF['price'] = prices # Remove listings with 0 for bedrooms, bathrooms, accomodates, price, beds, review_scores_rating, reviews_per_month listingDF = listingDF[listingDF.bedrooms > 0] listingDF = listingDF[listingDF.bathrooms > 0] listingDF = listingDF[listingDF.accommodates > 0] listingDF = listingDF[listingDF.price > 0] listingDF = listingDF[listingDF.beds > 0] listingDF = listingDF[listingDF.review_scores_rating > 0] listingDF = listingDF[listingDF.reviews_per_month > 0] listingDF.head() ###Output _____no_output_____ ###Markdown Analyzing the listings based on room types. It is stated in AirBnB's website that they have 3 room types.Ref: https://www.airbnb.com.sg/help/article/5/what-does-the-room-type-of-a-listing-mean ###Code # Number of room types print("Number of room types :", len(listingDF["room_type"].unique())) print() # Number of listings of each room type print(listingDF["room_type"].value_counts()) sb.catplot(x = "room_type", data = listingDF, kind = "count", palette="Set2") ###Output Number of room types : 3 Entire home/apt 1805 Private room 947 Shared room 91 Name: room_type, dtype: int64 ###Markdown As it can be seen from the countplot, most of the listings are entire home/apt with private rooms being second and shared rooms being the least. Analyzing the listings based on the property type. ###Code # Number of property types print("Number of property types :", len(listingDF["property_type"].unique())) print() # Number of listings of each room type print(listingDF["property_type"].value_counts()) sb.catplot(x = "property_type", data = listingDF, kind = "count", palette="Set2", height = 8, aspect = 2) ###Output Number of property types : 15 House 1403 Apartment 1194 Townhouse 78 Condominium 68 Bed & Breakfast 26 Loft 22 Cabin 17 Other 13 Camper/RV 8 Boat 5 Tent 4 Bungalow 2 Chalet 1 Treehouse 1 Dorm 1 Name: property_type, dtype: int64 ###Markdown From the above graph, we can see that there are a lot more listings of apartment and full houses than any other property type in seattle. Together with the earlier discovery that hosts prefer to list their full property than just a room or shared room, it can be inferred that most listings in Seattle are entire apartments or entire houses. Now lets analyze if these listing types have anything to do with the prices of the listings. Analyzing the prices for the different room and property types. 
###Code # Checking out the mean prices for the different room and property types roomProperty_DF = listingDF.groupby(['property_type','room_type']).price.mean() roomProperty_DF = roomProperty_DF.reset_index() roomProperty_DF=roomProperty_DF.sort_values('price',ascending=[0]) roomProperty_DF.head() # Plotting a heatmap of the mean price for room type and a property type plt.figure(figsize = (10,18)) sb.heatmap(listingDF.groupby(['property_type', 'room_type']).price.mean().unstack(), annot=True, fmt=".0f", cmap = sb.cm.rocket_r, cbar_kws={'label': 'mean_price'}) ###Output _____no_output_____ ###Markdown From the above heatmap, with lighter colour representing lower price and darker representing higher price, we can see that shared rooms have the lighest colour hence cheapest. Private rooms have a slightly darker colour so they are in the middle, and entire houses are the darkest thus the most expensive. It is also important to note that the highest number of listings which was house and apartments actually have very similar prices for each of the room_type category.All of this tells us that the room_type and property_type both play a very important role in the final price of the listing. Anaylzing the listings based on the number of bedrooms. ###Code # Plotting a boxplot to quickly see if there is any trend between price and no. bedrooms plt.figure(figsize=(12,12)) sb.boxplot(x='bedrooms', y='price', data=listingDF[['bedrooms', 'price']]) ###Output _____no_output_____ ###Markdown The boxplots above show that indeed there is a trend between bedrooms and price. So, we should anaylse the no. bedrooms further with the property type. ###Code # Creating a number of rooms vs property type dataframe noRoomDF = listingDF[['property_type', 'bedrooms']] noRoomDF.head(n=15) # Plotting a swarmplot to visually see the number of listings for each room_type and the no. bedrooms plt.figure(figsize=(12,12)) sb.swarmplot(x='bedrooms', y='property_type', data=noRoomDF) ###Output _____no_output_____ ###Markdown From the above swarmplot, we can see that generally, the number of listings decreases with higher no. of bedrooms. Additionally, only apartments and houses have more than 3 bedrooms with the exception of the boat house. We will now see the prices of these 2 variables plotted in a single heatmap. ###Code # Plotting a heatmap of prices with number of bedrooms for listings plt.figure(figsize=(12,12)) sb.heatmap(listingDF.groupby(['property_type', 'bedrooms']).price.mean().unstack(),annot=True, fmt=".0f", cmap = sb.cm.rocket_r, cbar_kws={'label': 'mean_price'}) ###Output _____no_output_____ ###Markdown From the above heatmap and boxplots, we can see that unsurprisingly, price of listings increases with number of bedrooms. Only the listing of full house with 7 bedrooms does not follow this trend. To find out why, the number of listings with each number of bedrooms was printed. ###Code # Number of bedrooms print("Number of bedrooms :", len(listingDF["bedrooms"].unique())) print() print("BedRms|Listings") # Number of listings of each room type print(listingDF["bedrooms"].value_counts()) ###Output Number of bedrooms : 7 BedRms|Listings 1.0 1999 2.0 532 3.0 236 4.0 52 5.0 17 6.0 6 7.0 1 Name: bedrooms, dtype: int64 ###Markdown From the table above, we can see that there was only 1 listing with 7 bedrooms. So, it can be seen as an exception (anomaly). So, the general trend is true, price of listing increases with no. bedrooms. 
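###Markdown To put a rough number on this visual trend (a quick sketch rather than a definitive analysis), we can also compute the correlation between the number of bedrooms and the price. ###Code # Quick sketch: correlation between number of bedrooms and price (illustrative only).
listingDF[['bedrooms', 'price']].corr() ###Output _____no_output_____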
So far, we can see that room type, property type and number of bedrooms have some effect on the price of a listing. We will now analyse whether any specific amenity in the property results in higher prices. Analyzing if any particular amenity results in higher prices. We are going to analyze the textual data of amenities by finding the words that appear most frequently in the amenities of the most expensive listings. ###Code import nltk from nltk.corpus import stopwords import re # Create a dataframe of the words that appear in the amenities section of the most expensive listings amenitiesDF = listingDF[['amenities','price','id',]] amenitiesDFTopper = amenitiesDF.sort_values('price',ascending=[0]) amenitiesDFtop=amenitiesDFTopper.head(30) allemenities = '' for index,row in amenitiesDFtop.iterrows(): p = re.sub('[^a-zA-Z]+',' ', row['amenities']) allemenities+=p allemenities_data=nltk.word_tokenize(allemenities) filtered_data=[word for word in allemenities_data if word not in stopwords.words('english')] wnl = nltk.WordNetLemmatizer() allemenities_data=[wnl.lemmatize(data) for data in filtered_data] allemenities_words=' '.join(allemenities_data) from wordcloud import WordCloud, STOPWORDS wordcloud = WordCloud(width = 1000, height = 700, background_color="white").generate(allemenities_words) plt.figure(figsize=(15,15)) plt.imshow(wordcloud) plt.axis("off") plt.show() ###Output _____no_output_____
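###Markdown The word cloud gives a qualitative picture; for exact counts we can reuse the `Counter` imported at the start of the notebook. This is a small illustrative sketch over the same lemmatized tokens built above. ###Code # Sketch: exact word frequencies behind the word cloud, using the lemmatized tokens above.
amenity_counts = Counter(allemenities_data)
amenity_counts.most_common(15) ###Output _____no_output_____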
Random Forest/Random_Forest_Human_Test.ipynb
###Markdown Random Forest Classifier ###Code from sklearn.ensemble import RandomForestClassifier classifier_250 = RandomForestClassifier(n_estimators=100).fit(X_train_250,Y_train_250) print("Accuracy 250: ", classifier_250.score(X_test_250, Y_test_250)) classifier_500 = RandomForestClassifier(n_estimators=100).fit(X_train_500,Y_train_500) print("Accuracy 500: ", classifier_500.score(X_test_500, Y_test_500)) predictions_250 = list(classifier_250.predict(X_test_250)) X_test_250['predictions'] = predictions_250 X_test_250['true_labels'] = Y_test_250 predictions_500 = list(classifier_500.predict(X_test_500)) X_test_500['predictions'] = predictions_500 X_test_500['true_labels'] = Y_test_500 def get_final_output(dataframe): final_output = [] for trajectory in range(test_size): current = dataframe.loc[(dataframe['trajectory_number'] == trajectory) & (dataframe['predictions'] == 1)] if(len(current.index) == 0): picked = dataframe.loc[(dataframe['trajectory_number'] == trajectory) & (dataframe['index'] == 14)] picked_tuple = (picked.price.values[0], int(picked.my_index.values[0])) else: picked = current.iloc[0] picked_tuple = (picked.price, int(picked.my_index)) final_output.append(picked_tuple) return final_output final_output_250 = get_final_output(X_test_250) final_output_500 = get_final_output(X_test_500) final_output = [final_output_250, final_output_500] final_output final_output_file_name = "../Human Experiments/Tests/random_forest" final_file_object = open(final_output_file_name, 'wb') pickle.dump(final_output, final_file_object) final_file_object.close() ###Output _____no_output_____
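###Markdown As a follow-up sketch (assuming `X_train_250` is a pandas DataFrame, which the column assignments above suggest), we can inspect which features the `classifier_250` model relies on most. ###Code import pandas as pd

# Sketch: rank the training features of the 250 model by importance (illustrative only).
importances_250 = pd.Series(classifier_250.feature_importances_, index=X_train_250.columns)
print(importances_250.sort_values(ascending=False).head(10)) ###Output _____no_output_____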
python-data-structures/interview-goog/count-smaller-after-self.ipynb
###Markdown Count of Smaller Numbers After Self. You are given an integer array nums and you have to return a new counts array. The counts array has the property where counts[i] is the number of smaller elements to the right of nums[i]. ###Code ''' Examples: [5,2,6,1] -> [2,1,1,0] [-1] -> [0] [-1,-1] -> [0,0] IDEA1: The first approach is easy. For each element, count the number of elements to its right that are smaller than the element itself. Time Complexity O(n^2) ''' def count_smaller(nums: list) -> list: res = [] for i in range(len(nums)): res.append(len([x for x in nums[i+1:] if x < nums[i]])) return res assert count_smaller([-1]) == [0] assert count_smaller([-1, -1]) == [0, 0] assert count_smaller([5,2,6,1]) == [2,1,1,0] ###Output _____no_output_____
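###Markdown A possible follow-up sketch (not part of the original notes): scan the array from the right while keeping the elements already seen in a sorted list, so that `bisect_left` reports how many of them are strictly smaller than the current element. Inserting into a Python list is still O(n), so the worst case remains quadratic, but it is usually much faster in practice; a Binary Indexed Tree or merge-sort counting would give a true O(n log n) solution. ###Code from bisect import bisect_left, insort

def count_smaller_sorted(nums: list) -> list:
    seen = []   # sorted list of the elements to the right of the current index
    res = []
    for x in reversed(nums):
        res.append(bisect_left(seen, x))  # how many already-seen elements are < x
        insort(seen, x)
    return res[::-1]

assert count_smaller_sorted([-1]) == [0]
assert count_smaller_sorted([-1, -1]) == [0, 0]
assert count_smaller_sorted([5,2,6,1]) == [2,1,1,0] ###Output _____no_output_____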
webinars/air_transport/Iniciando.ipynb
###Markdown GPU-accelerated Data Science: Getting Started with BlazingSQL and RAPIDS. Programming GPUs can be intimidating. In the past it required solid C++ and CUDA knowledge and the ability to think *in parallel*. Today, with [RAPIDS](https://rapids.ai) and [BlazingSQL](https://blazingsql.com), you can start using the power of GPUs right away, with minimal changes to your code: whether you use tools from the PyData ecosystem, such as pandas or Scikit-Learn, or you are more familiar with SQL, RAPIDS and BlazingSQL let you achieve impressive speedups thanks to the GPU. Import. First, let's import the tools we will use. ###Code import cudf import blazingsql as bsql import s3fs import numpy as np from collections import OrderedDict from IPython.display import HTML from bokeh.io import output_file, show from bokeh.models import ColumnDataSource, GMapOptions, LabelSet from bokeh.plotting import gmap ###Output _____no_output_____ ###Markdown `BlazingContext` You need to establish a `BlazingContext` to connect to the BlazingSQL instance and thus create tables, run queries and, basically, do anything with BlazingSQL. ###Code bc = bsql.BlazingContext() ###Output BlazingContext ready ###Markdown The `BlazingContext` works as the *entrypoint* for everything. In this particular instance we start the `BlazingContext` with default parameters; however, there are many ways to configure it and expand its capabilities.

|Argument|Required|Description|Defaults|
|:-------|:------:|:----------|-------:|
|allocator|No|Options are "default" and "managed". With "managed" it uses Unified Virtual Memory (UVM) and can use system memory if GPU memory runs out; "existing" assumes the rmm allocator is already configured and therefore does not initialize it (for advanced users).|"managed"|
|dask_client|No|The Dask client used for communication with other nodes. Only needed to run BlazingSQL with multiple nodes.|None|
|enable_logging|No|If True, memory allocator logging is enabled, which can negatively impact performance. For advanced users.|False|
|initial_pool_size|No|Initial size of the memory pool in bytes (if pool=True). Otherwise it defaults to half of the GPU memory.|None|
|pool|No|If True, the memory pool is allocated at startup. This can considerably improve performance.|False|
|network_interface|No|Network interface used to communicate with the dask-scheduler. See the note below.|'eth0'|
|config_options|No|A dictionary for setting certain parameters in the engine.| |

Ingesting and running data. There are two ways to load and run data using the tools of the RAPIDS ecosystem: load it directly into memory with `cudf`, or use `.create_table()` on the `BlazingContext`.
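###Markdown Before we load any data, note that the arguments above can be combined when constructing the context. As a minimal sketch (the pool size below is just an illustrative value), a context with a pre-allocated memory pool would look like this: ###Code # Minimal sketch based on the argument table above; the pool size is an arbitrary example value.
bc_pooled = bsql.BlazingContext(pool=True, initial_pool_size=4 * 1024**3)  # 4 GB pool ###Output _____no_output_____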
Data de vuelos ###Code flight_data_path = 's3://bsql/data/air_transport/flight_ontime_2020-0[1-5].parquet' s3 = s3fs.S3FileSystem(anon=True) files = [f's3://{f}' for f in s3.glob(flight_data_path)] files ###Output _____no_output_____ ###Markdown cuDF ###Code %%time flights = [] for f in files: flights.append(cudf.read_parquet(f, storage_options={'anon': True})) flights = cudf.concat(flights) flights.head(5) print(f'Número total de vuelos en el dataset: {len(flights):,}') ###Output Número total de vuelos en el dataset: 2,508,583 ###Markdown BlazingSQL ###Code _ = bc.s3( 'bsql' , bucket_name = 'bsql' ) bc.create_table('air_transport', files) %%time bc.sql('SELECT * FROM air_transport LIMIT 5') print(f'Número total de vuelos en el dataset: {bc.sql("SELECT COUNT(*) AS CNT FROM air_transport")["CNT"].iloc[0]:,}') ###Output Número total de vuelos en el dataset: 2,508,583 ###Markdown Columnas y tipos de data ###Code flights.columns flights.dtypes ###Output _____no_output_____ ###Markdown El `BlazingContext` retorna un objeto cuDF DataFrame, por lo que tenemos acceso al mismo API! ###Code bc_df = bc.sql('SELECT * FROM air_transport LIMIT 5') type(bc_df) bc_df.columns bc_df.dtypes ###Output _____no_output_____ ###Markdown Data de vuelos y aeropuertos ###Code airports_path = 's3://bsql/data/air_transport/airports.csv' airlines_path = 's3://bsql/data/air_transport/airlines.csv' airports_dtypes = OrderedDict([ ('Airport ID', 'int64') , ('Name', 'str') , ('City', 'str') , ('Country', 'str') , ('IATA', 'str') , ('ICAO', 'str') , ('Latitude', 'float64') , ('Longitude', 'float64') , ('Altitude', 'int64') , ('Timezone', 'str') , ('DST', 'str') , ('Type', 'str') , ('Source', 'str') ]) airports = cudf.read_csv( airports_path , names=list(airports_dtypes.keys()) , dtype=list(airports_dtypes.values()) , storage_options={'anon': True} ) airports.head() airlines_dtypes = OrderedDict([ ('Airline ID', 'int64') , ('Name', 'str') , ('Alias', 'str') , ('IATA', 'str') , ('ICAO', 'str') , ('Callsign', 'str') , ('Country', 'str') , ('Active', 'str') ]) airlines = cudf.read_csv( airlines_path , names=list(airlines_dtypes.keys()) , dtype=list(airlines_dtypes.values()) , storage_options={'anon': True} ) airlines.head() ###Output _____no_output_____ ###Markdown Puedes crear tablas BlazingSQL directamente desde cuDF DataFrames. ###Code bc.create_table('airports', airports) bc.create_table('airlines', airlines) ###Output _____no_output_____ ###Markdown Y ahora, podemos consultar y unir estos datasets. ###Code %%time bc.sql(''' SELECT A.FL_DATE , A.OP_UNIQUE_CARRIER , B.Name AS CARRIER_NAME , A.ORIGIN , C.Name AS ORIGIN_NAME , C.City AS ORIGIN_CITY , A.DEST , D.Name AS DEST_NAME , D.City AS DEST_CITY FROM air_transport AS A LEFT OUTER JOIN airlines AS B ON A.OP_UNIQUE_CARRIER = B.IATA LEFT OUTER JOIN airports AS C ON A.ORIGIN = C.IATA LEFT OUTER JOIN airports AS D ON A.DEST = D.IATA LIMIT 4 ''') ###Output CPU times: user 1.43 s, sys: 105 ms, total: 1.53 s Wall time: 1.23 s ###Markdown Lo hermoso de este ecosistema, y particualrmente de BlazingSQL, es la inter-operatividad con RAPIDS: podemos crear tablas desde cudf y cualquier formato soportado por cuDF, ya sea local o remoto; podemos registrar buckets desde `s3`, `gcp` con el `BlazingContext` y con soporte para Azure en futuros releases. Entonces, de forma sencilla, podemos crear tablas directamente desde archivos y escribir código que retorne un DataFrame cuDF uniendo Parquet y archivos CSV en sólo un par de líneas! 
###Code bc.create_table('airports_table', airports_path, names=list(airports_dtypes.keys()), dtype=list(airports_dtypes.values())) bc.create_table('airlines_table', airlines_path, names=list(airlines_dtypes.keys()), dtype=list(airlines_dtypes.values())) %%time bc.sql(''' SELECT A.FL_DATE , A.OP_UNIQUE_CARRIER , B.Name AS CARRIER_NAME , A.ORIGIN , C.Name AS ORIGIN_NAME , C.City AS ORIGIN_CITY , A.DEST , D.Name AS DEST_NAME , D.City AS DEST_CITY FROM air_transport AS A // READING FROM PARQUET LEFT OUTER JOIN airlines AS B ON A.OP_UNIQUE_CARRIER = B.IATA LEFT OUTER JOIN airports_table AS C // READING FROM CSV ON A.ORIGIN = C.IATA LEFT OUTER JOIN airports_table AS D // READING FROM CSV ON A.DEST = D.IATA LIMIT 4 ''') %%time ( flights[['FL_DATE', 'OP_UNIQUE_CARRIER', 'ORIGIN', 'DEST']] .merge(airlines[['IATA', 'Name']], left_on='OP_UNIQUE_CARRIER', right_on='IATA') .rename(columns={'Name': 'CARRIER_NAME'}) .drop(columns=['IATA']) .merge(airports[['IATA', 'Name', 'City']], left_on='ORIGIN', right_on='IATA') .rename(columns={'Name': 'ORIGIN_NAME', 'City': 'ORIGIN_CITY'}) .drop(columns=['IATA']) .merge(airports[['IATA', 'Name', 'City']], left_on='DEST', right_on='IATA') .rename(columns={'Name': 'DEST_NAME', 'City': 'DEST_CITY'}) .drop(columns=['IATA']) ).head() ###Output CPU times: user 400 ms, sys: 88.5 ms, total: 488 ms Wall time: 491 ms ###Markdown Preguntas 1. Cuántos aeropuertos hay en el dataset? ###Code print(f'Hay {len(flights["ORIGIN"].unique())} aeropuertos en el dataset') print(f'Hay {bc.sql("SELECT COUNT(DISTINCT ORIGIN) AS CNT FROM air_transport")["CNT"][0]} aeropuertos en el dataset') ###Output Hay 371 aeropuertos en el dataset ###Markdown 2. Cuántos vuelos tuvieron retraso y cuántos partieron a tiempo? Cuál es la distribución? ###Code print(f'{len(flights[flights["DEP_DELAY"] > 0]):,} vuelos con retraso y {len(flights[flights["DEP_DELAY"] <= 0]):,} vuelos a tiempo') ### calculando la distribución n_bins = 100 delays = flights[flights['DEP_DELAY'] > 0]['DEP_DELAY'] ontime = flights[flights['DEP_DELAY'] <= 0]['DEP_DELAY'] %%time del_bins = np.array([i * 15 for i in range(0, n_bins)], dtype='float64') delays_binned = delays.digitize(del_bins) delays_histogram = delays_binned.groupby().count() / len(delays) ( delays_histogram .set_index(del_bins[delays_histogram.index.to_array()-1]) .to_pandas() .plot(kind='bar', figsize=(20,9), ylim=[0,1.0], title='Distribución de salidas con demora') ) %%time ontime_bins = np.array([i * (-1) for i in range(n_bins,0,-1)], dtype='float64') ontime_binned = ontime.digitize(ontime_bins) ontime_histogram = ontime_binned.groupby().count() / len(ontime) ( ontime_histogram .set_index(ontime_bins[ontime_histogram.index.to_array()-1]) .to_pandas() .plot(kind='bar', figsize=(20,9), ylim=[0,1.0], title='Distribución de salidas a tiempo') ) ###Output CPU times: user 153 ms, sys: 3.99 ms, total: 157 ms Wall time: 156 ms ###Markdown 3. Cuáles son las top 10 aerolíneas y aeropuertos con mayores retrasos en por lo menos 1000 vuelos? Cuál es el promedio de demora? 
###Code delays = flights[flights['DEP_DELAY'] > 0][['DEP_DELAY', 'ORIGIN', 'DEST', 'OP_UNIQUE_CARRIER']] ontime = flights[flights['DEP_DELAY'] <= 0][['DEP_DELAY', 'ORIGIN', 'DEST', 'OP_UNIQUE_CARRIER']] bc.create_table('delays', delays) bc.create_table('ontime', ontime) ###Output _____no_output_____ ###Markdown Los que presentaron mayores retrasos ###Code %%time bc.sql(''' SELECT A.ORIGIN , B.Name AS ORIGIN_Airport , B.City AS ORIGIN_City , B.Country AS ORIGIN_Country , COUNT(*) AS DELAY_CNT , AVG(DEP_DELAY) AS AVG_DELAY FROM delays AS A LEFT OUTER JOIN airports AS B ON A.ORIGIN = B.IATA GROUP BY A.ORIGIN , B.Name , B.City , B.Country HAVING COUNT(*) > 1000 ORDER BY AVG(DEP_DELAY) DESC LIMIT 10 ''') %%time bc.sql(''' SELECT A.DEST , B.Name AS DEST_Airport , B.City AS DEST_City , B.Country AS DEST_Country , COUNT(*) AS DELAY_CNT , AVG(DEP_DELAY) AS AVG_DELAY FROM delays AS A LEFT OUTER JOIN airports AS B ON A.DEST = B.IATA GROUP BY A.DEST , B.Name , B.City , B.Country HAVING COUNT(*) > 1000 ORDER BY AVG(DEP_DELAY) DESC LIMIT 10 ''') %%time bc.sql(''' SELECT A.OP_UNIQUE_CARRIER AS CARRIER , B.Name AS CARRIER_Name , B.Country AS CARRIER_Country , COUNT(*) AS DELAY_CNT , AVG(DEP_DELAY) AS AVG_DELAY FROM delays AS A LEFT OUTER JOIN airlines AS B ON A.OP_UNIQUE_CARRIER = B.IATA GROUP BY A.OP_UNIQUE_CARRIER , B.Name , B.Country HAVING COUNT(*) > 1000 ORDER BY AVG(DEP_DELAY) DESC LIMIT 10 ''') ###Output CPU times: user 190 ms, sys: 4.92 ms, total: 195 ms Wall time: 138 ms ###Markdown Los más puntuales ###Code %%time bc.sql(''' SELECT A.ORIGIN , B.Name AS ORIGIN_Airport , B.City AS ORIGIN_City , B.Country AS ORIGIN_Country , COUNT(*) AS ONTIME_CNT , AVG(DEP_DELAY) AS AVG_ONTIME FROM ontime AS A LEFT OUTER JOIN airports AS B ON A.ORIGIN = B.IATA GROUP BY A.ORIGIN , B.Name , B.City , B.Country HAVING COUNT(*) > 1000 ORDER BY AVG(DEP_DELAY) DESC LIMIT 10 ''') %%time bc.sql(''' SELECT A.DEST , B.Name AS DEST_Airport , B.City AS DEST_City , B.Country AS DEST_Country , COUNT(*) AS ONTIME_CNT , AVG(DEP_DELAY) AS AVG_ONTIME FROM ontime AS A LEFT OUTER JOIN airports AS B ON A.DEST = B.IATA GROUP BY A.DEST , B.Name , B.City , B.Country HAVING COUNT(*) > 1000 ORDER BY AVG(DEP_DELAY) DESC LIMIT 10 ''') %%time bc.sql(''' SELECT A.OP_UNIQUE_CARRIER AS CARRIER , B.Name AS CARRIER_Name , B.Country AS CARRIER_Country , AVG(DEP_DELAY) AS AVG_ONTIME FROM ontime AS A LEFT OUTER JOIN airlines AS B ON A.OP_UNIQUE_CARRIER = B.IATA GROUP BY A.OP_UNIQUE_CARRIER , B.Name , B.Country HAVING COUNT(*) > 1000 ORDER BY AVG(DEP_DELAY) DESC LIMIT 10 ''') ###Output CPU times: user 472 ms, sys: 20 ms, total: 492 ms Wall time: 375 ms ###Markdown Dates, strings, oh my...Un error común es creer que usar GPUs es sólo útil para cálculos numéricos. Sin embargo, con RAPIDS y BlazingSQL puedes realizar operaciones en fechas y strings fácilmente y a la velocidad de GPUs! Vuelos por mes y día de la semanaAunque ya contamos con columnas como `YEAR` o `MONTH`, vamos a calcular estos valores nosotros mismos! 
###Code %%time flights['FL_DATE'] = flights['FL_DATE'].astype('datetime64[ms]') dated = flights[['FL_DATE', 'OP_UNIQUE_CARRIER']] dated['YEAR'] = dated['FL_DATE'].dt.year dated['MONTH'] = dated['FL_DATE'].dt.month dated['DAY'] = dated['FL_DATE'].dt.day dated['DOW'] = dated['FL_DATE'].dt.dayofweek %%time ( dated .groupby(['YEAR','MONTH']) .agg({'FL_DATE': 'count'}) .to_pandas() .plot(kind='bar', figsize=(12,9), title='Total de vuelos por mes') ) %%time ( dated .groupby(['MONTH','DAY', 'DOW']) .agg({'FL_DATE': 'count'}) .reset_index() .groupby(['DOW']) .agg({'FL_DATE': 'mean'}) .to_pandas() .plot(kind='bar', figsize=(12,9), title='Promedio de vuelos por día de semana') ) ###Output CPU times: user 58.1 ms, sys: 176 µs, total: 58.2 ms Wall time: 56.9 ms ###Markdown Aeropuertos internacionalesVamos a buscar el número total de aeropuertos que contenga la palabra 'International'. ###Code airports.head() international_airports = airports[['Name', 'City', 'Country', 'IATA', 'Latitude', 'Longitude']] international_airports['International'] = international_airports['Name'].str.extract('(International)') international_airports.dropna(inplace=True) print(f'Número total de aeropuertos que contengan la palabra "International": {len(international_airports)}') ###Output Número total de aeropuertos que contengan la palabra "International": 898 ###Markdown IMPORTANT: Before you run the cell below you will need to create an API key for Google Maps. Follow the instructions here and create an API Key for Google Maps Javascript API. ###Code output_file("international.html") map_options = GMapOptions(lat=30.2861, lng=-97.7394, map_type="roadmap", zoom=4) p = gmap("AIzaSyATG7aLmLbjrD8dRopJ8GlpZBq6ya0vrl8", map_options, title="International") source_dict = international_airports[['Latitude', 'Longitude', 'Name']].to_pandas().to_dict('list') source = ColumnDataSource(data=source_dict) p.circle(x="Longitude", y="Latitude", size=10, fill_color="blue", fill_alpha=0.8, source=source) labels = LabelSet(x="Longitude", y="Latitude", text='Name', level='glyph', x_offset=5, y_offset=5, source=source, render_mode='canvas') show(p) HTML(open('international.html', 'r').read()) ###Output _____no_output_____
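###Markdown A shorter, equivalent filter can be sketched with cuDF's `str.contains` (assuming a plain substring match is all we need, this gives the same airports as the extract-and-dropna approach above). ###Code # Sketch: substring filter instead of extract + dropna (illustrative alternative).
airports[airports['Name'].str.contains('International')][['Name', 'City', 'Country']].head() ###Output _____no_output_____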
prophet_functions.ipynb
###Markdown ###Code # !pip install utils # import utils # Conectar el notebook con la cuenta de gdrive from google.colab import drive drive.mount('/content/drive/', force_remount=False) BASE_FOLDER = 'drive/My Drive/TFM/resources/' import pandas as pd import numpy as np from fbprophet import Prophet from fbprophet.plot import plot_plotly, plot_components_plotly from fbprophet.diagnostics import cross_validation from fbprophet.diagnostics import performance_metrics from fbprophet.plot import plot_cross_validation_metric %run 'drive/My Drive/TFM/normalize_data_functions'.ipynb %run 'drive/My Drive/TFM/aemet'.ipynb # def prophet_prediction(df, periods_number, options): # m = Prophet() # if options and options["holidays"]: # m.add_country_holidays(country_name='Spain') # m.fit(df) # future = m.make_future_dataframe(periods=periods_number) # future.tail() # forecast = m.predict(future) # print(forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()) # fig1 = m.plot(forecast) # fig2 = m.plot_components(forecast) # # fig3 = plot_plotly(m, forecast) # return forecast def prophet_prediction(df, periods_number, options): yearly_order = options["yearly_order"] if options and options["yearly_order"] else 10 weekly_order = options["weekly_order"] if options and options["weekly_order"] else 10 m = Prophet(weekly_seasonality=weekly_order, yearly_seasonality = yearly_order) if options and options["holidays"]: m.add_country_holidays(country_name='Spain') df['on_workday'] = df['ds'].apply(is_workday) df['off_workday'] = ~df['ds'].apply(is_workday) m.add_seasonality(name='weekly_on_workday', period=7, fourier_order=10, condition_name='on_workday') m.add_seasonality(name='weekly_off_workday', period=7, fourier_order=3, condition_name='off_workday') m.fit(df) future = m.make_future_dataframe(periods=periods_number) future.tail() future['on_workday'] = future['ds'].apply(is_workday) future['off_workday'] = ~future['ds'].apply(is_workday) forecast = m.predict(future) print(forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()) if options and options["cross_validations"]: df_cv = cross_validation(m, initial='730 days', period='180 days', horizon = '365 days', parallel="processes") # df_cv = cross_validation(m, horizon = '365 days', parallel="processes") df_p = performance_metrics(df_cv) print(df_p) if options["figs_cross_validations"]: for fig in options["figs_cross_validations"]: plot_cross_validation_metric(df_cv, metric = fig) if options and options["plot_figures"]: fig1 = m.plot(forecast) fig2 = m.plot_components(forecast) # fig3 = plot_plotly(m, forecast) return forecast def predict_one_magnitude(file_name, magnitude, periods, options): df = prophet_data_normalized_one_magnitude(file_name, magnitude) forecast = prophet_prediction(df, periods, options) return {'forecast': forecast, 'df': df} def is_workday(ds): date = pd.to_datetime(ds) return (date.dayofweek > 4) def prophet_prediction_with_regresors(df, periods_number, regressors, future_aemet, options): yearly_order = options["yearly_order"] if options and options["yearly_order"] else 10 weekly_order = options["weekly_order"] if options and options["weekly_order"] else 10 # m = Prophet(weekly_seasonality=weekly_order, yearly_seasonality = yearly_order) m = Prophet(weekly_seasonality=False, yearly_seasonality = yearly_order) if options and options["holidays"]: m.add_country_holidays(country_name='Spain') for regressor in regressors: # m.add_regressor(regressor, prior_scale=0.5, mode='multiplicative') # m.add_regressor(regressor, prior_scale=2) 
m.add_regressor(regressor) df['on_workday'] = df['ds'].apply(is_workday) df['off_workday'] = ~df['ds'].apply(is_workday) m.add_seasonality(name='weekly_on_workday', period=7, fourier_order=10, condition_name='on_workday') m.add_seasonality(name='weekly_off_workday', period=7, fourier_order=3, condition_name='off_workday') # print(df) m.fit(df) # Provisional # print('porvisional') future_aemet['on_workday'] = future_aemet['ds'].apply(is_workday) future_aemet['off_workday'] = ~future_aemet['ds'].apply(is_workday) # print(future_aemet) forecast = m.predict(future_aemet) print(forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']]) if options and options["cross_validations"]: # df_cv = cross_validation(m, initial='730 days', period='180 days', horizon = '365 days', parallel="processes") df_cv = cross_validation(m, horizon = '365 days', parallel="processes") df_p = performance_metrics(df_cv) print(df_p) if options["figs_cross_validations"]: for fig in options["figs_cross_validations"]: plot_cross_validation_metric(df_cv, metric = fig) if options and options["plot_figures"]: fig1 = m.plot(forecast) fig2 = m.plot_components(forecast) # fig3 = plot_plotly(m, forecast) return forecast def predict_one_magnitude_with_aemet(file_name_magnitude, magnitude, file_name_aemet, columns_to_filter, periods_number, options): df_magnitude = prophet_data_normalized_one_magnitude(file_name_magnitude, magnitude) df_aemet = normalize_aemet_data(file_name_aemet, columns_to_filter) df_merged = pd.merge(df_magnitude, df_aemet, on='ds') future_aemet = filter_aemet_data_by_date(df_aemet, df_merged['ds'][0], '2020-01-07', columns_to_filter) table = filter_columns(df_merged, ['ds', 'y']) forecast = prophet_prediction_with_regresors(df_merged, periods_number, columns_to_filter, future_aemet, options) return {'forecast': forecast, 'df_merged': df_merged} ###Output _____no_output_____
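###Markdown A usage sketch for these helpers (the file name, magnitude code and option values below are hypothetical examples, not taken from the project data): ###Code # Hypothetical example values; every key read by prophet_prediction is provided.
options = {
    "yearly_order": 10,
    "weekly_order": 10,
    "holidays": True,
    "plot_figures": True,
    "cross_validations": False,
    "figs_cross_validations": None,
}

# result = predict_one_magnitude(BASE_FOLDER + 'data/NO2_station.csv', 'NO2', 365, options)  # hypothetical file and magnitude
# result['forecast'][['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail() ###Output _____no_output_____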
Spark_and_Python_For_Big_Data_with_PySpark/04-Spark_for_Machine_Learning/5-Collaborative_Filtering_for_Recommender_Systems/.ipynb_checkpoints/Consulting Project - Recommender Systems-checkpoint.ipynb
###Markdown Consulting Project Recommender Systems - Solutions The whole world seems to be hearing about your new amazing abilities to analyze big data and build useful systems for them! You've just taken up a new contract with a new online food delivery company. This company is trying to differentiate itself by recommending new meals to customers based off of other customers likings.Can you build them a recommendation system?Your final result should be in the form of a function that can take in a Spark DataFrame of a single customer's ratings for various meals and output their top 3 suggested meals. For example:Best of luck!** *Note from Jose: I completely made up this food data, so its likely that the actual recommendations themselves won't make any sense. But you should get a similar output to what I did given the example customer dataframe* ** ###Code import pandas as pd df = pd.read_csv('movielens_ratings.csv') df.describe().transpose() df.corr() import numpy as np df['mealskew'] = df['movieId'].apply(lambda id: np.nan if id > 31 else id) df.describe().transpose() mealmap = { 2. : "Chicken Curry", 3. : "Spicy Chicken Nuggest", 5. : "Hamburger", 9. : "Taco Surprise", 11. : "Meatloaf", 12. : "Ceaser Salad", 15. : "BBQ Ribs", 17. : "Sushi Plate", 19. : "Cheesesteak Sandwhich", 21. : "Lasagna", 23. : "Orange Chicken", 26. : "Spicy Beef Plate", 27. : "Salmon with Mashed Potatoes", 28. : "Penne Tomatoe Pasta", 29. : "Pork Sliders", 30. : "Vietnamese Sandwich", 31. : "Chicken Wrap", np.nan: "Cowboy Burger", 4. : "Pretzels and Cheese Plate", 6. : "Spicy Pork Sliders", 13. : "Mandarin Chicken PLate", 14. : "Kung Pao Chicken", 16. : "Fried Rice Plate", 8. : "Chicken Chow Mein", 10. : "Roasted Eggplant ", 18. : "Pepperoni Pizza", 22. : "Pulled Pork Plate", 0. : "Cheese Pizza", 1. : "Burrito", 7. : "Nachos", 24. : "Chili", 20. : "Southwest Salad", 25.: "Roast Beef Sandwich"} df['meal_name'] = df['mealskew'].map(mealmap) df.to_csv('Meal_Info.csv',index=False) from pyspark.sql import SparkSession spark = SparkSession.builder.appName('recconsulting').getOrCreate() from pyspark.ml.evaluation import RegressionEvaluator from pyspark.ml.recommendation import ALS data = spark.read.csv('Meal_Info.csv',inferSchema=True,header=True) (training, test) = data.randomSplit([0.8, 0.2]) # Build the recommendation model using ALS on the training data als = ALS(maxIter=5, regParam=0.01, userCol="userId", itemCol="mealskew", ratingCol="rating") model = als.fit(training) # Evaluate the model by computing the RMSE on the test data predictions = model.transform(test) predictions.show() evaluator = RegressionEvaluator(metricName="rmse", labelCol="rating",predictionCol="prediction") rmse = evaluator.evaluate(predictions) print("Root-mean-square error = " + str(rmse)) ###Output _____no_output_____
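###Markdown To actually produce the deliverable described above (the top 3 suggested meals for a single customer), a minimal sketch can reuse the fitted model on one user's meals from the test split; userId 11 is just an arbitrary example user. ###Code # Sketch: top-3 meal suggestions for a single (example) user from the test split.
single_user = (test.filter(test['userId'] == 11)
                   .filter(test['mealskew'].isNotNull())
                   .select(['mealskew', 'meal_name', 'userId']))

recommendations = model.transform(single_user)
recommendations.orderBy('prediction', ascending=False).show(3) ###Output _____no_output_____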
06-conditional-prob/intro.ipynb
###Markdown Dependence and Independence. Different random phenomena often depend on each other in obvious or subtle ways. *Conditional probability* is the study of such dependence. This is one of the most important concepts in probability. In fact, it is often said that "all probabilities are conditional", which means that we should always understand probability through the concepts taught in this chapter. Some examples of how we can use conditional probabilities are:* In null hypothesis testing, we are asking whether an observed difference in a summary statistic **depends** on the underlying grouping of the data. This dependence can be best expressed in terms of conditional probabilities and conditional expectations.* In wireless communications, we receive a noisy signal and need to determine which information symbol was transmitted. The noisy signal **depends** on the transmitted information, and thus we can use tools from conditional probability to make optimal decisions about the transmitted symbol.* In compound experiments, the later experiments often **depend** on the results of the earlier experiments. For instance, consider drawing cards from a deck. If the first card is an ace, then the probability that the second card will be an ace is different than if the first card had not been an ace. In this chapter, I will introduce tools for working with conditional probability. These tools will often help us answer questions that are challenging to understand without conditional probability. Below I give a few quick examples of simple problems that people often find challenging to answer correctly: $\mbox{ }$ ###Code from jupyterquiz import display_quiz git="https://raw.githubusercontent.com/jmshea/Foundations-of-Data-Science-with-Python/main/questions/" ###Output _____no_output_____ ###Markdown **Balls and Boxes/Urns** A classic set of probability problems involves pulling colored balls from an urn. Because I have found that many students are not familiar with the word *urn* (which is like a large vase), I will use a box instead. To make the problem interesting, the box must contain more than two balls, and the balls must be of at least two different colors. Below I ask some questions about a box containing three balls, of which two are white and one is black. ###Code display_quiz(git+'ch5-intro.json') ###Output _____no_output_____ ###Markdown If you are surprised or confused by the answers, don't worry. That is normal, and even after we solve these in detail, it may still take you some time to build intuition and deeper understanding of these problems. **The Monty Hall Problem** Some readers may be familiar with this problem. However, even if you are familiar with the problem, you may still not understand the mathematics behind solving it. This problem became famous when it appeared in the "Ask Marilyn" column of *Parade* magazine, which was distributed with many Sunday newspapers. In this column, Marilyn vos Savant answered tricky questions, and she provided an answer to this problem that provoked a lot of surprise and correspondence. The problem is based on an old TV game show called *Let's Make a Deal*, which was hosted by Monty Hall. (The problem setup varies somewhat from how the actual TV game worked.)
Here is a slightly paraphrased version of the problem from Parade magazine: You are on a game show, and you're given the choice of three doors:* Behind one door is a car* Behind the other doors are goats. You pick a door, and the host, who knows what's behind the doors, opens another door, which he knows has a goat. The host then offers you the option to switch doors. Does it matter if you switch? If switching changes your probability of getting the prize, what is the new probability? ###Code display_quiz(git+'ch5-monty-hall.json') ###Output _____no_output_____
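###Markdown Before we analyze this formally, a quick simulation can help check your intuition. The sketch below (using only the standard library) plays the game many times under each strategy and estimates the probability of winning the car. ###Code import random

def monty_hall_trial(switch):
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the car
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

N = 100_000
print('Estimated P(win | stay)   =', sum(monty_hall_trial(False) for _ in range(N)) / N)
print('Estimated P(win | switch) =', sum(monty_hall_trial(True) for _ in range(N)) / N) ###Output _____no_output_____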
Tarek AbdElRahman&Mostafa Kamal - Project4.ipynb
###Markdown Step 1: Read the data (the json files and metadata) Read the metadata file ###Code metadata = spark.read\ .format("csv")\ .option("header", "true")\ .load(home+"/Dataset/metadata.csv") ###Output _____no_output_____ ###Markdown We begin by randomly sampling 10,000 papers from the pdf_json folder. We can do this through the following linux command, utilizing the `shuf` command`~  shuf -zn10000 -e document_parses/pdf_json/* | xargs -0 cp -vt random_sample/` Read the sample ###Code papers = spark.read\ .format("json")\ .option("multiLine", "true")\ .load(home+"/Dataset/random_sample/") ###Output _____no_output_____ ###Markdown Step 2: Explore the data From the metadata exploration we found that many columns has nulls and some are only nulls ###Code metadata.printSchema() metadata.show(5) from pyspark.sql.functions import isnan, when, count, col metadata.select([count(when(isnan(c) | col(c).isNull(), c)).alias(c) for c in metadata.columns]).show() ###Output +--------+-----+--------+-----+-----+-----+---------+-------+--------+------------+-------+-------+------+----------------+--------+--------------+--------------+-----+-----+ |cord_uid| sha|source_x|title| doi|pmcid|pubmed_id|license|abstract|publish_time|authors|journal|mag_id|who_covidence_id|arxiv_id|pdf_json_files|pmc_json_files| url|s2_id| +--------+-----+--------+-----+-----+-----+---------+-------+--------+------------+-------+-------+------+----------------+--------+--------------+--------------+-----+-----+ | 0|77267| 0| 30|29459|71159| 32790| 41| 29300| 44| 4995| 6651|133300| 113503| 132417| 75636| 87526|11453|28314| +--------+-----+--------+-----+-----+-----+---------+-------+--------+------------+-------+-------+------+----------------+--------+--------------+--------------+-----+-----+ ###Markdown The json files schema is available in the dataset where we found all the info we need ###Code papers.show(10) ###Output +--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+ | abstract| back_matter| bib_entries| body_text| metadata| paper_id| ref_entries| +--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+ | []|[[[[48,, 44, 3.37...|[, [[[H, Adams, [...|[[[], [[79, TABRE...|[[[[,,], , , Bert...|2b1cbb43a4f06e232...|[[, Chest Wall Th...| |[[[], [], Abstrac...|[[[], [], acknowl...|[[[], , [,,,], , ...|[[[[1203,, 1200, ...|[[[[,,], , ;, Bey...|c9f4a4df803f6496b...|[[, Figure 2, fig...| | []|[[[[1609,, 1586, ...|[[[[T, Aaberg, [M...|[[[[505,, 497, Ka...| [[], ]|5c6c26e79c0824645...|[,, [, Technische...| | []|[[[], [], annex, ...|[[[[Organizationa...|[[[], [], , LEARN...|[[[[University of...|38c0691ee76fb1e66...|[[, A 40 YEAR OLD...| | []|[[[], [], acknowl...|[[[], , [,,,], , ...|[[[[234,, 231, (1...|[[[[,,], , C, Dob...|67c8d389c28ceef26...|[[, (containing P...| | []| []|[[[], , [,,,], , ...|[[[[24,, 19, July...| [[], ]|8c93a69ce8eb2ba08...|[[, . 2013;62:540...| |[[[], [], Abstrac...|[[[], [[10,, 1, (...|[[[], , [,,,], , ...|[[[], [], Univers...|[[[[Bloodworks NW...|e3b74d02ad582540a...|[[, BP7: RBC in-v...| | []| []|[[[[K, Aas, [], ]...|[[[], [], The Imm...| [[], ]|7e62ae259b327bda2...|[[, Fig.1.2. (Con...| | []|[[[], [], acknowl...|[[[[V, Moura, [R]...|[[[[199,, 195, (P...|[[[[,,], , ,, Kra...|a78f60c67c8ac17c7...|[[, Thomas G. 
Bro...| |[[[], [], Abstrac...|[[[[808, BIBREF33...|[[[[A, Caudy, [A]...|[[[], [], Backgro...|[[[[Academy of Sc...|c60cc22e04f138a79...|[[, Argonaute pro...| +--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+--------------------+ only showing top 10 rows ###Markdown Now we need to process the json files and get the info we need in a simple structured dataframe Define a function to gather all paragraphs in one text and gather abstract paragraphs and body paragraphs ###Code from pyspark.sql.functions import udf def concatenate_text(j): txt_all = "" for a in j: txt_all = txt_all + " " + a['text'] return txt_all udf_concatenate_text = udf(concatenate_text) papers = papers.select(papers['Paper_ID'], papers['metadata']['title'], udf_concatenate_text(papers['abstract']), udf_concatenate_text(papers['body_text'])) papers.printSchema() papers = papers.withColumnRenamed('metadata.title', 'title')\ .withColumnRenamed('concatenate_text(abstract)', 'abstract')\ .withColumnRenamed('concatenate_text(body_text)', 'body') papers.show(10) ###Output +--------------------+--------------------+--------------------+--------------------+ | Paper_ID| title| abstract| body| +--------------------+--------------------+--------------------+--------------------+ |2b1cbb43a4f06e232...|Level 3 guideline...| | S3-guideline on ...| |c9f4a4df803f6496b...|Physicians Poster...| Background: Prom...| with Thiotepa 5 ...| |5c6c26e79c0824645...| | | Der Weltgesundhe...| |38c0691ee76fb1e66...|Society of Genera...| | LEARNING OBJECTI...| |67c8d389c28ceef26...| Diedrich (1)| | The presence of ...| |8c93a69ce8eb2ba08...| | | July-December 20...| |e3b74d02ad582540a...|P1-A03A IgG Subty...| Background/Case ...| Background/Case ...| |7e62ae259b327bda2...| | | Definitions. 
The...| |a78f60c67c8ac17c7...|Alterations in 14...| | Objective: Using...| |c60cc22e04f138a79...|EXTERNAL SCIENTIF...| This report is t...| This report is a...| +--------------------+--------------------+--------------------+--------------------+ only showing top 10 rows ###Markdown Seems that some fields have empty values, let's explore that ###Code from pyspark.sql.functions import isnan, when, count, col papers.select([count(when(col(c) == "", c)).alias(c) for c in papers.columns]).show() ###Output +--------+-----+--------+----+ |Paper_ID|title|abstract|body| +--------+-----+--------+----+ | 0| 1060| 2927| 0| +--------+-----+--------+----+ ###Markdown Step 3: Prepare and process the data Now join the papers dataframe the metadata and get only the columns that we might use as features ###Code papers_meta = papers.join(metadata, papers['Paper_ID'] == metadata['sha'], how='left_outer')\ .select(papers['Paper_ID'], papers['title'], papers['body'], metadata['publish_time'],\ metadata['authors'], metadata['journal']) from pyspark.sql.functions import year, month, to_date papers_meta = papers_meta.withColumn("publish_Year", year(to_date("publish_time")))\ .withColumn("publish_Month", month(to_date("publish_time"))) papers_meta.show(5) ###Output +--------------------+--------------------+--------------------+------------+--------------------+--------------------+------------+-------------+ | Paper_ID| title| body|publish_time| authors| journal|publish_Year|publish_Month| +--------------------+--------------------+--------------------+------------+--------------------+--------------------+------------+-------------+ |0782b4fb23ca65baf...|The population ge...| The model will b...| 2015-11-30|Tibayrenc, Michel...| Acta Tropica| 2015| 11| |14d8fd027f39c8311...|Drug-Induced Dela...| Drug-induced del...| 2015-05-15|Klimas, Natasha; ...|Cutaneous Drug Er...| 2015| 5| |1e5228e3f0658479a...|Canonicalizing Kn...| User-generated c...| 2020-04-17|Fatma, Nausheen; ...|Advances in Knowl...| 2020| 4| |1e663ac169e08ea02...|The effects of a ...| Evidence has bee...| 2006-07-31|Yip, Paul S.F.; F...|Journal of Affect...| 2006| 7| |1f26b5e8291ea1ddc...|The Case for Labo...| The field of pat...| 2017-07-16|Kaul, Karen L.; S...| Acad Pathol| 2017| 7| +--------------------+--------------------+--------------------+------------+--------------------+--------------------+------------+-------------+ only showing top 5 rows ###Markdown Define a function to detect the paper language and filter out non-english papers ###Code from langdetect import detect def detect_lang(txt): try: return detect(txt) except: return None udf_detect_lang = udf(detect_lang) papers_meta = papers_meta.withColumn('Lang', udf_detect_lang(papers_meta['body'])) ## some papers are indeed non English papers_meta.select("*").where("Lang<>'en'").show(5) papers_meta.count() papers_meta = papers_meta.filter(papers_meta['Lang'] == 'en') papers_meta.count() #Drop the unneeded columns papers_meta = papers_meta.drop('publish_time', 'Lang') papers_meta.show(5) ###Output +--------------------+--------------------+--------------------+--------------------+--------------------+------------+-------------+ | Paper_ID| title| body| authors| journal|publish_Year|publish_Month| +--------------------+--------------------+--------------------+--------------------+--------------------+------------+-------------+ |0782b4fb23ca65baf...|The population ge...| The model will b...|Tibayrenc, Michel...| Acta Tropica| 2015| 11| |14d8fd027f39c8311...|Drug-Induced Dela...| 
Drug-induced del...|Klimas, Natasha; ...|Cutaneous Drug Er...| 2015| 5| |1e5228e3f0658479a...|Canonicalizing Kn...| User-generated c...|Fatma, Nausheen; ...|Advances in Knowl...| 2020| 4| |1e663ac169e08ea02...|The effects of a ...| Evidence has bee...|Yip, Paul S.F.; F...|Journal of Affect...| 2006| 7| |1f26b5e8291ea1ddc...|The Case for Labo...| The field of pat...|Kaul, Karen L.; S...| Acad Pathol| 2017| 7| +--------------------+--------------------+--------------------+--------------------+--------------------+------------+-------------+ only showing top 5 rows ###Markdown We will periodically save the dataframes we are working on in parquet format with 12 partitions (the number of logical cores in our machine) for performance improvements. ###Code ## Save what we have so far papers_meta.repartition(12).write.parquet("./Dataset/papers_meta") #papers_meta = spark.read\ # .format("parquet")\ # .option("header", "true")\ # .load("./Dataset/papers_meta") ###Output _____no_output_____ ###Markdown Find which columns contain null or unknown values and replace them with " " for categorical and 0 for numerical Count Nulls and Empty values in each column to understand more about the data ###Code from pyspark.sql.functions import isnan, when, count, col print("Count of Null") papers_meta.select([count(when(isnan(c) | col(c).isNull(), c)).alias(c) for c in papers_meta.columns]).show() print("Count of Empty values") papers_meta.select([count(when(col(c) == "", c)).alias(c) for c in papers_meta.columns]).show() cat_cols = [item[0] for item in papers_meta.dtypes if item[1].startswith('string')] cat_cols from pyspark.sql.functions import col cat_null_cols = [column for column in cat_cols if papers_meta.where(col(column).isNull()| col(column).isin('')).count() > 0] cat_null_cols ### Now let's fill with " " for column in cat_null_cols: papers_meta = papers_meta.na.fill(" ") num_cols = [item[0] for item in papers_meta.dtypes if item[1].startswith('int') | item[1].startswith('double')] num_cols ### Now let's find numerical columns with null values num_null_cols = [column for column in num_cols if papers_meta.filter(col(column).isNull() | col(column).eqNullSafe(0)).count() > 0] num_null_cols ### Now let's fill with 0 for column in num_null_cols: papers_meta = papers_meta.na.fill(0) papers_meta.show(10) ###Output +--------------------+--------------------+--------------------+--------------------+--------------------+------------+-------------+ | Paper_ID| title| body| authors| journal|publish_Year|publish_Month| +--------------------+--------------------+--------------------+--------------------+--------------------+------------+-------------+ |012b16ae54779ca1a...|Synchronized Bive...| Miniaturized imp...|Lyu, Hongming; Jo...| Sci Rep| 2020| 2| |01bc7fe59fc7feb0e...|Open Forum Infect...| Acute upper resp...|Joseph, Patrick; ...|Open Forum Infect...| 2018| 2| |0240a12c9fdf6c031...|Supersize me: how...| In epidemiology,...|Kao, Rowland R.; ...|Trends in Microbi...| 2014| 5| |03bfb747583f6b214...|The Infant Nasoph...| The human microb...| | | 0| 0| |06e9041ff3cb1db28...|Selected Nonvacci...| A cute respirato...|Lee, Terrence; Jo...|American Journal ...| 2005| 4| |0760e79585cd85c7e...| | Infectious disea...| Froes, F.| Pulmonology| 2020| 4| |0adf2e22eefb4ea5e...|Aberrant coagulat...| Influenza A viru...|Yang, Yan; Tang, ...|Cellular & Molecu...| 2016| 4| |0b137fde2327ae466...|Tips and Tricks f...| Manufacturing of...|Viganò, Mariele; ...| Stem Cells Int| 2018| 9| |0c95f3083af8f4daf...|Automated TruTip ...| 
Nucleic acid tec...|Thakore, Nitu; No...| PLoS One| 2018| 7| |0ef55241b6127c6bc...|Putative papain-r...| Polyprotein proc...| | | 0| 0| +--------------------+--------------------+--------------------+--------------------+--------------------+------------+-------------+ only showing top 10 rows ###Markdown Now drop duplicates rows (No duplicates found in our sample) ###Code papers_meta.count() papers_meta.dropDuplicates() papers_meta.count() ## Save what we have so far papers_meta.repartition(12).write.parquet("./Dataset/papers_meta_cleaned") papers_meta = spark.read\ .format("parquet")\ .option("header", "true")\ .load("./Dataset/papers_meta_cleaned") ###Output _____no_output_____ ###Markdown Step 4 (Preprocessing) Now let's start the Processing phase for the papers body and title ###Code from pyspark.sql.functions import lower, regexp_replace from pyspark.ml.feature import Tokenizer, StopWordsRemover ###Output _____no_output_____ ###Markdown Convert the body text to lower case Remove Punctuation ###Code from pyspark.sql.functions import col papers_meta = papers_meta.withColumn("body", lower(col('body'))) papers_meta = papers_meta.withColumn("body", regexp_replace("body", "[^a-zA-Z\\s]" , " ")) papers_meta = papers_meta.withColumn("body", regexp_replace("body", " +" , " ")) papers_meta = papers_meta.withColumn("body", regexp_replace("body", "^ +" , "")) papers_meta = papers_meta.withColumn("title", lower(col('title'))) papers_meta = papers_meta.withColumn("title", regexp_replace("title", "[^a-zA-Z\\s]" , " ")) papers_meta = papers_meta.withColumn("title", regexp_replace("title", " +" , " ")) papers_meta = papers_meta.withColumn("title", regexp_replace("title", "^ +" , "")) papers_meta.show(10) ###Output +--------------------+--------------------+--------------------+--------------------+------------+------------+-------------+ | Paper_ID| title| body| authors| journal|publish_Year|publish_Month| +--------------------+--------------------+--------------------+--------------------+------------+------------+-------------+ |002f213aeda7ce843...|etiology and impa...|community acquire...|Nolan, Vikki G; A...|J Infect Dis| 2018| 7| |0297dd12949520da3...|optimization of p...|the localization ...|Quintana, C.; Mar...| Micron| 1998| 8| |09045f5964f24691f...|evaluation of tnf...|the toxic effects...|Rook, Graham A. 
W...| Biotherapy| 1991| 1| |0e3da58a0d46d88ee...|the influence of ...|increasing our un...| | | 0| 0| |103c89c60d7d24bb8...|does pathogen spi...|pathogen outbreak...|Otterstatter, Mic...| PLoS One| 2008| 7| |12d9952ba3cf8410e...|immunoglobulin he...|immunoglobulin ge...| | | 0| 0| |18f991e0b4dc59943...|center for medica...|the novel coronav...|Huang, Ying hui; ...| | 2020| 4| |192b2c4501ca09903...|generalized latti...|several graph rep...|González-Díaz, H....|J Theor Biol| 2009| 11| |1a5c7512b0e842a7b...|sars coronavirus ...|severe acute resp...|Varshney, Bhavna;...| PLoS One| 2012| 1| |1ba8aa57522bdeb6b...|genetic cellular ...|the year occasion...| Nabel, Gary J| Nat Med| 2004| 1| +--------------------+--------------------+--------------------+--------------------+------------+------------+-------------+ only showing top 10 rows ###Markdown Tokenize the paper body text and title text ###Code tokenizer = Tokenizer(inputCol='body', outputCol='words_token') papers_meta = tokenizer.transform(papers_meta).select('*') tokenizer = Tokenizer(inputCol='title', outputCol='title_token') papers_meta = tokenizer.transform(papers_meta).select('*') papers_meta.select('words_token', 'title_token').show(10) ###Output +--------------------+--------------------+ | words_token| title_token| +--------------------+--------------------+ |[community, acqui...|[etiology, and, i...| |[the, localizatio...|[optimization, of...| |[the, toxic, effe...|[evaluation, of, ...| |[increasing, our,...|[the, influence, ...| |[pathogen, outbre...|[does, pathogen, ...| |[immunoglobulin, ...|[immunoglobulin, ...| |[the, novel, coro...|[center, for, med...| |[several, graph, ...|[generalized, lat...| |[severe, acute, r...|[sars, coronaviru...| |[the, year, occas...|[genetic, cellula...| +--------------------+--------------------+ only showing top 10 rows ###Markdown Remove stop words ###Code remover = StopWordsRemover(inputCol='words_token', outputCol='words_clean') papers_meta = remover.transform(papers_meta).select('*') remover = StopWordsRemover(inputCol='title_token', outputCol='title_clean') papers_meta = remover.transform(papers_meta).select('*') papers_meta.select('words_clean', 'title_clean').show(10) ###Output +--------------------+--------------------+ | words_clean| title_clean| +--------------------+--------------------+ |[community, acqui...|[etiology, impact...| |[localization, ch...|[optimization, ph...| |[toxic, effects, ...|[evaluation, tnf,...| |[increasing, unde...|[influence, clima...| |[pathogen, outbre...|[pathogen, spillo...| |[immunoglobulin, ...|[immunoglobulin, ...| |[novel, coronavir...|[center, medical,...| |[several, graph, ...|[generalized, lat...| |[severe, acute, r...|[sars, coronaviru...| |[year, occasioned...|[genetic, cellula...| +--------------------+--------------------+ only showing top 10 rows ###Markdown Now we need to remove the custom stopwords ###Code remover2 = StopWordsRemover(inputCol='words_clean', outputCol='words_clean_custom', stopWords = ['doi', 'preprint', 'copyright', 'peer', 'reviewed', 'org', 'https', 'et', 'al', 'author', 'figure','rights', 'reserved', 'permission', 'used', 'using', 'biorxiv', 'medrxiv', 'license', 'fig', 'fig.', 'al.', 'Elsevier', 'PMC', 'CZI', 'www']) papers_meta = remover2.transform(papers_meta).select('*') remover2 = StopWordsRemover(inputCol='title_clean', outputCol='title_clean_custom', stopWords = ['doi', 'preprint', 'copyright', 'peer', 'reviewed', 'org', 'https', 'et', 'al', 'author', 'figure','rights', 'reserved', 'permission', 'used', 'using', 
'biorxiv', 'medrxiv', 'license', 'fig', 'fig.', 'al.', 'Elsevier', 'PMC', 'CZI', 'www']) papers_meta = remover2.transform(papers_meta).select('*') papers_meta.select('words_clean_custom', 'title_clean_custom').show(10) from pyspark.sql.functions import size papers_meta = papers_meta.withColumn("wordcount", size("words_clean_custom")) papers_meta.printSchema() ###Output root |-- Paper_ID: string (nullable = true) |-- title: string (nullable = true) |-- body: string (nullable = true) |-- authors: string (nullable = true) |-- journal: string (nullable = true) |-- publish_Year: integer (nullable = true) |-- publish_Month: integer (nullable = true) |-- words_token: array (nullable = true) | |-- element: string (containsNull = true) |-- title_token: array (nullable = true) | |-- element: string (containsNull = true) |-- words_clean: array (nullable = true) | |-- element: string (containsNull = true) |-- title_clean: array (nullable = true) | |-- element: string (containsNull = true) |-- words_clean_custom: array (nullable = true) | |-- element: string (containsNull = true) |-- title_clean_custom: array (nullable = true) | |-- element: string (containsNull = true) |-- wordcount: integer (nullable = false) ###Markdown Drop the unneeded columns ###Code #papers_meta = papers_meta.drop('title', 'body', 'words_token', 'words_clean', 'title_token', 'title_clean') papers_meta = papers_meta.drop('body', 'words_token', 'words_clean', 'title_token') papers_meta.show() papers_meta.repartition(12).write.parquet("./papers_meta_processed") #papers_meta = spark.read\ # .format("parquet")\ # .option("header", "true")\ # .load("./Dataset/papers_meta_processed") ###Output _____no_output_____ ###Markdown Step 5 (Vectorization and prepare the features column) Now Apply Word2Vec on the processed body text "words_clean_custom" and the title text "title_clean_custom" ###Code from pyspark.ml.feature import Word2Vec # Learn a mapping from words to Vectors. 
word2Vec = Word2Vec(vectorSize=100, minCount=0, inputCol="words_clean_custom", outputCol="word2vec_body") model = word2Vec.fit(papers_meta) papers_meta = model.transform(papers_meta) word2Vec = Word2Vec(vectorSize=100, minCount=0, inputCol="title_clean_custom", outputCol="word2vec_title") model = word2Vec.fit(papers_meta) papers_meta = model.transform(papers_meta) papers_meta.show(10) papers_meta.repartition(12).write.parquet("./Dataset/papers_meta_word2vec") papers_meta = spark.read\ .format("parquet")\ .option("header", "true")\ .load("./Dataset/papers_meta_word2vec") papers_meta.printSchema() #Select only the needed columns papers_meta = papers_meta.select('Paper_ID', 'authors', 'journal', 'wordcount', 'publish_Year', 'publish_Month', 'word2vec_body', 'word2vec_title') papers_meta.show(5) ###Output +--------------------+--------------------+--------------------+---------+------------+-------------+--------------------+--------------------+ | Paper_ID| authors| journal|wordcount|publish_Year|publish_Month| word2vec_body| word2vec_title| +--------------------+--------------------+--------------------+---------+------------+-------------+--------------------+--------------------+ |012b16ae54779ca1a...|Lyu, Hongming; Jo...| Sci Rep| 1698| 2020| 2|[0.01163073299174...|[-0.0075389318427...| |01bc7fe59fc7feb0e...|Joseph, Patrick; ...|Open Forum Infect...| 2106| 2018| 2|[0.08144156184301...|[-0.0090094477359...| |0240a12c9fdf6c031...|Kao, Rowland R.; ...|Trends in Microbi...| 3808| 2014| 5|[0.04915223334121...|[-0.0112262216571...| |03bfb747583f6b214...| | | 3848| 0| 0|[0.03949146437089...|[0.00115656061097...| |06e9041ff3cb1db28...|Lee, Terrence; Jo...|American Journal ...| 2619| 2005| 4|[0.09074272874731...|[-0.0065258390604...| +--------------------+--------------------+--------------------+---------+------------+-------------+--------------------+--------------------+ only showing top 5 rows ###Markdown Now let's prepare the features vector First, Define StringIndexers for categorical columns ###Code from pyspark.ml.feature import StringIndexer from pyspark.ml.feature import OneHotEncoder cat_cols = [item[0] for item in papers_meta.dtypes if item[1].startswith('string')][1:] cat_cols indexers = [StringIndexer( inputCol=column, outputCol=column + '_index', handleInvalid='keep') for column in cat_cols] encoders = [OneHotEncoder( inputCol=column + '_index', outputCol= column + '_encoded') for column in cat_cols] from pyspark.ml import Pipeline pipeline = Pipeline(stages=indexers + encoders) papers_meta_transformed = pipeline.fit(papers_meta).transform(papers_meta) papers_meta_transformed.show(10) ###Output +--------------------+--------------------+--------------------+---------+------------+-------------+--------------------+--------------------+-------------+-------------+-------------------+-------------------+ | Paper_ID| authors| journal|wordcount|publish_Year|publish_Month| word2vec_body| word2vec_title|authors_index|journal_index| authors_encoded| journal_encoded| +--------------------+--------------------+--------------------+---------+------------+-------------+--------------------+--------------------+-------------+-------------+-------------------+-------------------+ |012b16ae54779ca1a...|Lyu, Hongming; Jo...| Sci Rep| 1698| 2020| 2|[0.01163073299174...|[-0.0075389318427...| 7681.0| 7.0|(8321,[7681],[1.0])| (2750,[7],[1.0])| |01bc7fe59fc7feb0e...|Joseph, Patrick; ...|Open Forum Infect...| 2106| 2018| 2|[0.08144156184301...|[-0.0090094477359...| 1436.0| 92.0|(8321,[1436],[1.0])| 
(2750,[92],[1.0])| |0240a12c9fdf6c031...|Kao, Rowland R.; ...|Trends in Microbi...| 3808| 2014| 5|[0.04915223334121...|[-0.0112262216571...| 3535.0| 704.0|(8321,[3535],[1.0])| (2750,[704],[1.0])| |03bfb747583f6b214...| | | 3848| 0| 0|[0.03949146437089...|[0.00115656061097...| 0.0| 0.0| (8321,[0],[1.0])| (2750,[0],[1.0])| |06e9041ff3cb1db28...|Lee, Terrence; Jo...|American Journal ...| 2619| 2005| 4|[0.09074272874731...|[-0.0065258390604...| 3552.0| 357.0|(8321,[3552],[1.0])| (2750,[357],[1.0])| |0760e79585cd85c7e...| Froes, F.| Pulmonology| 477| 2020| 4|[0.07848690268100...|[0.00330294785089...| 7879.0| 1169.0|(8321,[7879],[1.0])|(2750,[1169],[1.0])| |0adf2e22eefb4ea5e...|Yang, Yan; Tang, ...|Cellular & Molecu...| 3184| 2016| 4|[-0.0165081877109...|[-0.0174122095664...| 8314.0| 398.0|(8321,[8314],[1.0])| (2750,[398],[1.0])| |0b137fde2327ae466...|Viganò, Mariele; ...| Stem Cells Int| 4208| 2018| 9|[0.05817865815495...|[-0.0057283864339...| 6888.0| 2276.0|(8321,[6888],[1.0])|(2750,[2276],[1.0])| |0c95f3083af8f4daf...|Thakore, Nitu; No...| PLoS One| 1980| 2018| 7|[0.05883210357652...|[0.00249506733962...| 704.0| 1.0| (8321,[704],[1.0])| (2750,[1],[1.0])| |0ef55241b6127c6bc...| | | 941| 0| 0|[0.06429963556128...|[-0.0121964396426...| 0.0| 0.0| (8321,[0],[1.0])| (2750,[0],[1.0])| +--------------------+--------------------+--------------------+---------+------------+-------------+--------------------+--------------------+-------------+-------------+-------------------+-------------------+ only showing top 10 rows ###Markdown Now select the required features and apply the vector assembler ###Code requiredFeatures = [ 'wordcount', 'publish_Year', 'publish_Month', 'word2vec_body', 'word2vec_title', 'authors_encoded', 'journal_encoded' ] from pyspark.ml.feature import VectorAssembler assembler = VectorAssembler(inputCols=requiredFeatures, outputCol='features') papers_meta_transformed = assembler.transform(papers_meta_transformed) papers_meta_transformed.show(10) papers_meta_transformed.select('features').head(1) papers_meta_transformed.repartition(12,'features').write.parquet("./Dataset/papersmeta_transformed") papers_meta_transformed.show(5) ###Output +--------------------+--------------------+--------------------+---------+------------+-------------+--------------------+--------------------+-------------+-------------+-------------------+-------------------+--------------------+ | Paper_ID| authors| journal|wordcount|publish_Year|publish_Month| word2vec_body| word2vec_title|authors_index|journal_index| authors_encoded| journal_encoded| features| +--------------------+--------------------+--------------------+---------+------------+-------------+--------------------+--------------------+-------------+-------------+-------------------+-------------------+--------------------+ |0d3951ca998dcf8af...|Veir, Julia K.; L...|Veterinary Clinic...| 2490| 2010| 11|[0.06525062580420...|[0.00330294785089...| 1449.0| 113.0|(8321,[1449],[1.0])| (2750,[113],[1.0])|(11274,[0,1,2,3,4...| |10218d6165dcbaa34...|Engström, Patrik;...| Nat Microbiol| 5389| 2019| 10|[0.12919333469184...|[-0.0156040839663...| 6916.0| 479.0|(8321,[6916],[1.0])| (2750,[479],[1.0])|(11274,[0,1,2,3,4...| |1b59e2160f61456f3...|Parashar, Bhupesh...| Cureus| 2854| 2020| 5|[0.08179591124634...|[0.02742541182841...| 6954.0| 120.0|(8321,[6954],[1.0])| (2750,[120],[1.0])|(11274,[0,1,2,3,4...| |204e90c2972ccfb55...|Rong, Q.; Alexand...| Arch Virol| 2549| 2003| 1|[0.12304916116595...|[-0.0329982108669...| 630.0| 3.0| (8321,[630],[1.0])| 
(2750,[3],[1.0])|(11274,[0,1,2,3,4...| |20bb0f949ada4e02d...|Musselwhite, Char...| J Transp Health| 1233| 2020| 4|[0.06484576156660...|[0.00330294785089...| 5787.0| 2614.0|(8321,[5787],[1.0])|(2750,[2614],[1.0])|(11274,[0,1,2,3,4...| +--------------------+--------------------+--------------------+---------+------------+-------------+--------------------+--------------------+-------------+-------------+-------------------+-------------------+--------------------+ only showing top 5 rows ###Markdown Step 6 (PCA and Clustering) Apply the PCA ###Code papers_meta_transformed = spark.read\ .format("parquet")\ .option("header", "true")\ .load("./Dataset/papersmeta_transformed") from pyspark.ml.feature import PCA from pyspark.ml.linalg import Vectors pca = PCA(k=2, inputCol="features", outputCol="features_pca") model = pca.fit(papers_meta_transformed) papers_meta_transformed = model.transform(papers_meta_transformed) model.explainedVariance papers_meta_transformed.show(10) papers_meta_transformed.repartition(12).write.parquet("./Dataset/papersmeta_transformed_pca") ###Output _____no_output_____ ###Markdown Define the clustering model Choose number of clusters k (based on Elbow method and silhouette_score) ###Code papers_meta_transformed = spark.read\ .format("parquet")\ .option("header", "true")\ .load("./Dataset/papersmeta_transformed_pca") from pyspark.ml.clustering import KMeans from pyspark.ml.evaluation import ClusteringEvaluator # Calculate cost and plot import numpy as np import pandas as pd cost = np.zeros(15) silhouette = np.zeros(15) for k in range(2,15): kmeans = KMeans().setK(k).setSeed(1).setFeaturesCol('features_pca') model = kmeans.fit(papers_meta_transformed) cost[k] = model.computeCost(papers_meta_transformed) clusterdData = model.transform(papers_meta_transformed) evaluator = ClusteringEvaluator() silhouette[k] = evaluator.evaluate(clusterdData) # Plot the cost df_eval = pd.DataFrame(np.array([cost[2:].tolist(),silhouette[2:].tolist()])).transpose() df_eval.columns = ["cost", "silhouette_score"] new_col = [2,3,4,5,6,7,8,9,10,11,12,13,14] df_eval.insert(0, 'cluster', new_col) import seaborn as sns import matplotlib.pyplot as plt sns.set_style("whitegrid") fig = plt.figure(figsize=(40,10)) ax1 = fig.add_subplot(1, 2, 1) ax1.set(xticks=range(2,15)) ax2 = fig.add_subplot(1, 2, 2) ax2.set(xticks=range(2,15)) sns.lineplot(x='cluster', y='cost', data=df_eval, ax=ax1) sns.lineplot(x='cluster', y='silhouette_score', data=df_eval, ax=ax2) df_eval ###Output _____no_output_____ ###Markdown We choose k=4 based on the plots above ###Code kmeans = KMeans().setK(4).setFeaturesCol('features_pca') model = kmeans.fit(papers_meta_transformed) clusterdData = model.transform(papers_meta_transformed) ###Output _____no_output_____ ###Markdown Step 7 Recommender System The goal here is to build a basic recommender the system that reccomends similar papers to a given title. When provided with a paper title, the recommender is only going to consider papers which belong to the same cluster. it will then run cosine similiraty between the given title and the processed paper titles in the database. 
it will return a dictionary with suggested paper titles and the cosine dot product with respect to the given title ###Code paper_titles_df = spark.read\ .format("parquet")\ .option("header", "true")\ .load("./Dataset/papers_meta_word2vec").select('Paper_ID', 'title', 'title_clean') clusterdData = paper_titles_df.join(clusterdData, on=['Paper_ID'], how='inner') clusterdData.show(5) ###Output +--------------------+--------------------+--------------------+--------------------+--------------------+---------+------------+-------------+--------------------+--------------------+-------------+-------------+-------------------+-------------------+--------------------+--------------------+----------+ | Paper_ID| title| title_clean| authors| journal|wordcount|publish_Year|publish_Month| word2vec_body| word2vec_title|authors_index|journal_index| authors_encoded| journal_encoded| features| features_pca|prediction| +--------------------+--------------------+--------------------+--------------------+--------------------+---------+------------+-------------+--------------------+--------------------+-------------+-------------+-------------------+-------------------+--------------------+--------------------+----------+ |0782b4fb23ca65baf...|the population ge...|[population, gene...|Tibayrenc, Michel...| Acta Tropica| 3686| 2015| 11|[0.02713752761508...|[0.01237550669466...| 4017.0| 2721.0|(8321,[4017],[1.0])|(2750,[2721],[1.0])|(11274,[0,1,2,3,4...|[-3692.4819062727...| 0| |14d8fd027f39c8311...|drug induced dela...|[drug, induced, d...|Klimas, Natasha; ...|Cutaneous Drug Er...| 2594| 2015| 5|[0.04067398937230...|[-0.0022071010898...| 6489.0| 1331.0|(8321,[6489],[1.0])|(2750,[1331],[1.0])|(11274,[0,1,2,3,4...|[-2600.4873834375...| 0| |1e5228e3f0658479a...|canonicalizing kn...|[canonicalizing, ...|Fatma, Nausheen; ...|Advances in Knowl...| 2101| 2020| 4|[0.06466745659127...|[0.00304986406117...| 4030.0| 40.0|(8321,[4030],[1.0])| (2750,[40],[1.0])|(11274,[0,1,2,3,4...|[-2107.5060457725...| 0| |1e663ac169e08ea02...|the effects of a ...|[effects, celebri...|Yip, Paul S.F.; F...|Journal of Affect...| 1177| 2006| 7|[0.07743242526835...|[0.00422781358273...| 1147.0| 1477.0|(8321,[1147],[1.0])|(2750,[1477],[1.0])|(11274,[0,1,2,3,4...|[-1183.4657902211...| 0| |1f26b5e8291ea1ddc...|the case for labo...|[case, laboratory...|Kaul, Karen L.; S...| Acad Pathol| 7361| 2017| 7|[0.03405737735745...|[0.03623301810067...| 8216.0| 1072.0|(8321,[8216],[1.0])|(2750,[1072],[1.0])|(11274,[0,1,2,3,4...|[-7367.4690924711...| 3| +--------------------+--------------------+--------------------+--------------------+--------------------+---------+------------+-------------+--------------------+--------------------+-------------+-------------+-------------------+-------------------+--------------------+--------------------+----------+ only showing top 5 rows ###Markdown We will pre-calculate tf and idf values and store them in a dataframe named data for our recommender system. 
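As a quick illustration of what those tf-idf values represent, here is a tiny pure-Python sketch (an aside, not part of the Spark pipeline). It assumes Spark's smoothed idf formula, idf = log((N + 1) / (df + 1)), where N is the number of documents and df the number of documents containing the term; the actual values used by the recommender come from the Spark cell that follows.

###Code
import math

# Toy corpus of three "titles" (token lists), purely for illustration.
docs = [['coronavirus', 'vaccine', 'trial'],
        ['coronavirus', 'outbreak', 'response'],
        ['influenza', 'vaccine', 'efficacy']]

def tf_idf(term, doc, docs):
    tf = doc.count(term)                        # raw term frequency in this document
    df = sum(1 for d in docs if term in d)      # number of documents containing the term
    idf = math.log((len(docs) + 1) / (df + 1))  # smoothed inverse document frequency
    return tf * idf

# A common term gets a low weight, a rarer term a higher one.
print(tf_idf('coronavirus', docs[0], docs))
print(tf_idf('trial', docs[0], docs))
###Output
_____no_output_____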
###Code from pyspark.sql.functions import udf, col df_recommender = clusterdData.select('title', 'title_clean', col('prediction').alias('cluster')) from pyspark.ml.feature import HashingTF, IDF hashingTF = HashingTF(inputCol="title_clean", outputCol="tf") tf = hashingTF.transform(df_recommender) idf = IDF(inputCol="tf", outputCol="idf_feature").fit(tf) tfidf = idf.transform(tf) from pyspark.ml.feature import Normalizer normalizer = Normalizer(inputCol="idf_feature", outputCol="norm") data = normalizer.transform(tfidf) ###Output _____no_output_____ ###Markdown We will define the recommendPaper function. it takes a paper title and returns N recommendations from the same cluster. ###Code from pyspark.sql.types import DoubleType dot_udf = udf(lambda x,y: float(x.dot(y)), DoubleType()) def recommendPaper(paper_title,N, data=data): target_paper = data.filter(data['title'] == paper_title) input_cluster = target_paper.select('cluster').collect()[0].cluster data = data.filter(data['cluster'] == input_cluster) recommendations = target_paper.alias("tearget_paper").crossJoin(data.alias("right"))\ .select(col("tearget_paper.title").alias("target_title"), col("right.title").alias("recommended_title"), dot_udf("tearget_paper.norm", "right.norm").alias("dot"))\ .sort(col("dot").desc())\ .limit(N+1) return {reccomendation.recommended_title:reccomendation.dot for reccomendation in recommendations.collect()[1:]} recommendPaper('the population genetics of trypanosoma cruzi revisited in the light of the predominant clonal evolution model',4) recommendPaper('annual update in intensive care and emergency medicine',2) recommendPaper('use of simple laboratory features to distinguish the early stage of severe acute respiratory syndrome from dengue fever',3) recommendPaper('virology of hepatitis c virus',10) ###Output _____no_output_____
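###Markdown
A closing note on the `dot` scores returned by `recommendPaper`: because the `Normalizer` stage scales every tf-idf vector to unit L2 length, the dot product of two of these vectors is exactly their cosine similarity. A minimal NumPy sketch of that equivalence (illustrative only, independent of the Spark pipeline):

###Code
import numpy as np

a = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([1.0, 1.0, 0.0, 2.0])

# Dot product of the L2-normalised vectors ...
a_unit = a / np.linalg.norm(a)
b_unit = b / np.linalg.norm(b)
dot_of_units = float(a_unit.dot(b_unit))

# ... equals the usual cosine-similarity formula.
cosine = float(a.dot(b)) / (np.linalg.norm(a) * np.linalg.norm(b))
assert np.isclose(dot_of_units, cosine)
print(dot_of_units)
###Output
_____no_output_____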
examples/6_train_classifier.ipynb
###Markdown Train a ClassifierIn this notebook we train a Gradient Boosting Decision Tree (GBDT) classifier using the implementation of the package [LightGBM](https://lightgbm.readthedocs.io/en/latest/). Index1. [Import Packages](imports)2. [Load Features](loadFeatures)3. [Generate Classifier](generateClassifier) 1. [Untrained Classifier](createClassifier) 2. [Train Classifier](trainClassifier) 3. [Save the Classifier Instance](saveClassifier) 1. Import Packages ###Code import os import pickle import sys import time import lightgbm as lgb import matplotlib.pyplot as plt import numpy as np import pandas as pd import scipy.stats from snmachine import snclassifier from utils.plasticc_pipeline import get_directories, load_dataset import warnings warnings.simplefilter('always', DeprecationWarning) %config Completer.use_jedi = False # enable autocomplete ###Output _____no_output_____ ###Markdown 2. Load FeaturesFirst, **write** the path to the folder that contains the features and the labels of the events (`path_saved_features`). These quantities were calculated and saved in [5_feature_extraction](5_feature_extraction.ipynb). 2.1. Features Path**A)** Obtain path from folder structure.If you created a folder structure, you can obtain the path from there. **Write** the name of the folder in `analysis_name`. ###Code analysis_name = 'example_dataset_aug' folder_path = '../snmachine/example_data' directories = get_directories(folder_path, analysis_name) path_saved_features = directories['features_directory'] ###Output _____no_output_____ ###Markdown **B)** Directly **write** where you saved the files. ```pythonfolder_path = '../snmachine/example_data'path_saved_features = folder_path``` 2.2. LoadThen, load the features and labels. ###Code X = pd.read_pickle(os.path.join(path_saved_features, 'features.pckl')) # features y = pd.read_pickle(os.path.join(path_saved_features, 'data_labels.pckl')) # class label of each event ###Output _____no_output_____ ###Markdown **A)** If the dataset is not augmented, skip **B)**.**B)** If the dataset is augmented, load the augmented dataset.In order to avoid information leaks during the classifier optimization, all synthetic events generated by the training set augmentation which derived from the same original event must be placed in the same cross-validation fold. First, **write** in `data_file_name` the name of the file where your dataset is saved.In this notebook we use the dataset saved in [4_augment_data](4_augment_data.ipynb). ###Code data_file_name = 'example_dataset_aug.pckl' ###Output _____no_output_____ ###Markdown Then, load the augmented dataset. ###Code data_path = os.path.join(folder_path, data_file_name) dataset = load_dataset(data_path) metadata = dataset.metadata ###Output _____no_output_____ ###Markdown 3. Generate Classifier 3.1. Untrained ClassifierStart by creating a classifier. 
For that **choose**: - classifier type: `snmachine` contains the following classifiers * [LightGBM](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMClassifier.html?highlight=classifier) classifier - `snclassifier.LightGBMClassifier` * Boosted decision trees - `snclassifier.BoostDTClassifier` * Boosted random forests - `snclassifier.BoostRFClassifier` * K-nearest neighbors vote - `snclassifier.KNNClassifier` * Support vector machine - `snclassifier.SVMClassifier` * Multi-layer Perceptron classifier of a Neural Network - `snclassifier.NNClassifier` * Random forest - `snclassifier.RFClassifier` * Decision tree - `snclassifier.DTClassifier` * Gaussian Naive Bayes - `snclassifier.NBClassifier`- `random_seed`: this allows reproducible results (**optional**).- `classifier_name`: name under which the classifier is saved (**optional**).- `**kwargs`: optional keywords to pass arguments into the underlying classifier; see the docstring in each classifier for more information (**optional**).Here we chose a LightGBM classifier. ###Code classifier_instance = snclassifier.LightGBMClassifier(classifier_name='our_classifier', random_seed=42) ###Output Created classifier of type: LGBMClassifier(random_state=42). ###Markdown 3.2. Train ClassifierWe can now train and use the classifier generated above or optimise it beforehand. In general, it is important to optimise the classifier hyperparameters.If you do not want to optimise the classifier, **run** **A)**.**A)** Train unoptimised classifier. ```pythonclassifier.fit(X, y)``` If you want to optimise the classifier, run **B)**.**B)** Optimise and train classifier.For that, **choose**:- `param_grid`: parameter grid containing the hyperparameters names and lists of their possible settings as values. If none is provided, the code uses a default parameter grid. (**optional**)- `scoring`: metric used to evaluate the predictions on the validation sets and write it in `scoring`. * `snmachine` contains the `'auc'` and the PLAsTiCC `'logloss'` costum metrics. For more details about these, see `snclassifier.logloss_score` and `snclassifier.auc_score`, respectively. * Additionally, you can choose a different metric from the list in [Scikit-learn](https://scikit-learn.org/stable/modules/model_evaluation.htmlscoring-parameter) or create your own (see [`sklearn.model_selection._search.GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) for details).- `number_cv_folds`: number of folds for cross-validation. By default it is 5. (**optional**)- `metadata`: metadata of the events with which to train the classifier. This ensures all synthetic events generated by the training set augmentation that were derived from the same original event are placed in the same cross-validation fold. (**optional**) ###Code param_grid={'learning_rate': [.1, .25, .5]} classifier_instance.optimise(X, y, param_grid=param_grid, scoring='logloss', number_cv_folds=5, metadata=metadata) ###Output Cross-validation for an augmented dataset. The optimisation takes 0.892s. ###Markdown The classifier is optimised and its optimised hyperparameters are: ###Code classifier_instance.classifier classifier_instance.grid_search.best_params_ classifier_instance.classifier_name ###Output _____no_output_____ ###Markdown 3.3. Save the Classifier Instance**Write** in `path_saved_classifier` the path to the folder where to save the trained classifier instance. 
###Code path_saved_classifier = directories['classifications_directory'] ###Output _____no_output_____ ###Markdown Save the classifier instance (which includes the grid search used to optimise the classifier). ###Code classifier_instance.save_classifier(path_saved_classifier) ###Output Classifier saved in ../snmachine/example_data/example_dataset_aug/classifications/our_classifier.pck .
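###Markdown
If you want to reuse the trained classifier in a later session, you can load it back from disk. The sketch below assumes the saved `.pck` file is a plain Python pickle of the classifier instance; `snmachine` may provide its own loading helper, so treat this as illustrative rather than the library's official API.

###Code
import os
import pickle

# Hypothetical reload of the saved classifier instance (assumes a plain pickle file).
classifier_path = os.path.join(path_saved_classifier, 'our_classifier.pck')
with open(classifier_path, 'rb') as handle:
    reloaded_classifier = pickle.load(handle)

# If the underlying fitted estimator survives pickling, new feature matrices with the
# same columns as `X` could then be classified, e.g. via reloaded_classifier.classifier.
print(type(reloaded_classifier))
###Output
_____no_output_____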
notebook/mlops2.ipynb
###Markdown Safe MLOps Deployment Pipeline OverviewIn this notebook you will step through an MLOps pipeline to build, train, deploy and monitor an XGBoost regression model for predicting the expected taxi fare using the New York City Taxi [dataset](https://registry.opendata.aws/nyc-tlc-trip-records-pds/)⇗. This safe pipeline features a [canary deployment](https://docs.aws.amazon.com/wellarchitected/latest/machine-learning-lens/canary-deployment.html)⇗ strategy with rollback on error. You will learn how to trigger and monitor the pipeline, inspect the training workflow, use model monitor to set up alerts, and create a canary deployment. Note: This notebook assumes prior familiarity with the basics training ML models on Amazon SageMaker. Data preparation and visualization, although present, will be kept to a minimum. If you are not familiar with the basic concepts and features of SageMaker, we recommend reading the SageMaker documentation⇗ and completing the workshops and samples in AWS SageMaker Examples GitHub⇗ and AWS Samples GitHub⇗. ContentsThis notebook has the following key sections:1. [Data Prep](Data-Prep)2. [Build](Build)3. [Train Model](Train-Model)4. [Deploy Dev](Deploy-Dev)5. [Deploy Prod](Deploy-Prod)6. [Monitor](Monitor)6. [Cleanup](Cleanup) ArchitectureThe architecture diagram below shows the entire MLOps pipeline at a high level.Use the CloudFormation template provided in this repository (`pipeline.yml`) to build the demo in your own AWS account. If you are currently viewing this notebook from SageMaker in your AWS account, then you have already completed this step. CloudFormation deploys several resources: 1. A customer-managed encryption key in in Amazon KMS for encrypting data and artifacts.1. A secret in Amazon Secrets Manager to securely store your GitHub Access Token.1. Several AWS IAM roles so CloudFormation, SageMaker, and other AWS services can perform actions in your AWS account, following the principle of [least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.htmlgrant-least-privilege)⇗.1. A messaging service in Amazon SNS to notify you when CodeDeploy has successfully deployed the API, and to receive alerts for retraining and drift detection (signing up for these notifications is optional).1. Two Amazon CloudWatch event rules: one which schedules the pipeline to run every month, and one which triggers the pipeline to run when SageMaker Model Monitor detects certain metrics.1. An Amazon SageMaker Jupyter notebook with this workshop content pre-loaded.1. An Amazon S3 bucket for storing model artifacts.1. An AWS CodePipeline instance with several pre-defined stages. Take a moment to look at all of these resources now deployed in your account. ![MLOps pipeline architecture](../docs/mlops-architecture.png)In this notebook, you will work through the CodePipeline instance created by the CloudFormation template. It has several stages:1. **Source** - The pipeline is already configured with two sources. If you upload a new dataset to a specific location in the S3 data bucket, this will trigger the pipeline to run. The Git source can be GitHub, or CodeCommit if you don’t supply your access token. If you commit new code to your repository, this will trigger the pipeline to run. 1. 
**Build** - In this stage, CodeBuild configured by the build specification `model/buildspec.yml` will execute `model/run.py` to generate AWS CloudFormation templates for creating the AWS Step Function (including AWS Lambda custom resources), and deployment templates used in the following stages based on the data sets and hyperparameters specified for this pipeline run. You will take a closer look at these files later in this notebook. 1. **Train** The Step Functions workflow created in the Build stage is run in this stage. The workflow creates a baseline for the model monitor using a SageMaker processing job, and trains an XGBoost model on the taxi ride dataset using a SageMaker training job.1. **Deploy Dev** In this stage, a CloudFormation template created in the build stage (from `assets/deploy-model-dev.yml`) deploys a dev endpoint. This will allow you to run tests on the model and decide if the model is of sufficient quality to deploy into production.1. **Deploy Production** The final stage of the pipeline is the only stage which does not run automatically as soon as the previous stage is complete. It waits for a user to manually approve the model which was previously deployed to dev. As soon as the model is approved, a CloudFormation template (packaged from `assets/deploy-model-prod.yml` to include the Lambda functions saved and uploaded as ZIP files in S3) deploys the production endpoint. It configures autoscaling and enables data capture. It creates a model monitoring schedule and sets CloudWatch alarms for certain metrics. It also sets up an AWS CodeDeploy instance which deploys a set of AWS Lambda functions and an Amazon API Gateway to sit in front of the SageMaker endpoint. This stage can make use of canary deployment to safely switch from an old model to a new model. ###Code # Import the latest sagemaker and boto3 SDKs. import sys !{sys.executable} -m pip install --upgrade pip !{sys.executable} -m pip install -qU awscli boto3 "sagemaker>=2.1.0<3" tqdm !{sys.executable} -m pip install -qU "stepfunctions==2.0.0" !{sys.executable} -m pip show sagemaker stepfunctions ###Output _____no_output_____ ###Markdown Restart your SageMaker kernel then continue with this notebook. Data Prep In this section of the notebook, you will download the publicly available New York Taxi dataset in preparation for uploading it to S3. Download DatasetFirst, download a sample of the New York City Taxi [dataset](https://registry.opendata.aws/nyc-tlc-trip-records-pds/)⇗ to this notebook instance. This dataset contains information on trips taken by taxis and for-hire vehicles in New York City, including pick-up and drop-off times and locations, fares, distance traveled, and more. ###Code !aws s3 cp 's3://exp01-dev-modelexample-rnybworc-data/trip data/green_tripdata_2020-01.csv' 'nyc-tlc.csv' ###Output download: s3://exp01-dev-modelexample-rnybworc-data/trip data/green_tripdata_2020-01.csv to ./nyc-tlc.csv ###Markdown Now load the dataset into a pandas data frame, taking care to parse the dates correctly. ###Code import pandas as pd parse_dates= ['lpep_dropoff_datetime', 'lpep_pickup_datetime'] trip_df = pd.read_csv('nyc-tlc.csv', parse_dates=parse_dates) trip_df.head() ###Output /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/IPython/core/interactiveshell.py:3072: DtypeWarning: Columns (3) have mixed types.Specify dtype option on import or set low_memory=False. 
interactivity=interactivity, compiler=compiler, result=result) ###Markdown Data manipulationInstead of the raw date and time features for pick-up and drop-off, let's use these features to calculate the total time of the trip in minutes, which will be easier to work with for our model. ###Code trip_df['duration_minutes'] = (trip_df['lpep_dropoff_datetime'] - trip_df['lpep_pickup_datetime']).dt.seconds/60 ###Output _____no_output_____ ###Markdown The dataset contains a lot of columns we don't need, so let's select a sample of columns for our machine learning model. Keep only `total_amount` (fare), `duration_minutes`, `passenger_count`, and `trip_distance`. ###Code cols = ['total_amount', 'duration_minutes', 'passenger_count', 'trip_distance'] data_df = trip_df[cols] print(data_df.shape) data_df.head() ###Output (447770, 4) ###Markdown Generate some quick statistics for the dataset to understand the quality. ###Code data_df.describe() ###Output _____no_output_____ ###Markdown The table above shows some clear outliers, e.g. -400 or 2626 as fare, or 0 passengers. There are many intelligent methods for identifying and removing outliers, but data cleaning is not the focus of this notebook, so just remove the outliers by setting some min and max values which seem more reasonable. Removing the outliers results in a final dataset of 754,671 rows. ###Code data_df = data_df[(data_df.total_amount > 0) & (data_df.total_amount < 200) & (data_df.duration_minutes > 0) & (data_df.duration_minutes < 120) & (data_df.trip_distance > 0) & (data_df.trip_distance < 121) & (data_df.passenger_count > 0)].dropna() print(data_df.shape) ###Output (312891, 4) ###Markdown Data visualizationSince this notebook will build a regression model for the taxi data, it's a good idea to check if there is any correlation between the variables in our data. Use scatter plots on a sample of the data to compare trip distance with duration in minutes, and total amount (fare) with duration in minutes. ###Code import seaborn as sns sample_df = data_df.sample(1000) sns.scatterplot(data=sample_df, x='duration_minutes', y='trip_distance') sns.scatterplot(data=sample_df, x='duration_minutes', y='total_amount') ###Output _____no_output_____ ###Markdown These scatter plots look fine and show at least some correlation between our variables. Data splitting and savingWe are now ready to split the dataset into train, validation, and test sets. ###Code from sklearn.model_selection import train_test_split train_df, val_df = train_test_split(data_df, test_size=0.20, random_state=42) val_df, test_df = train_test_split(val_df, test_size=0.05, random_state=42) # Reset the index for our test dataframe test_df.reset_index(inplace=True, drop=True) print('Size of\n train: {},\n val: {},\n test: {} '.format(train_df.shape[0], val_df.shape[0], test_df.shape[0])) ###Output Size of train: 250312, val: 59450, test: 3129 ###Markdown Save the train, validation, and test files as CSV locally on this notebook instance. Notice that you save the train file twice - once as the training data file and once as the baseline data file. The baseline data file will be used by [SageMaker Model Monitor](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor.html)⇗ to detect data drift. Data drift occurs when the statistical nature of the data that your model receives while in production drifts away from the nature of the baseline data it was trained on, which means the model begins to lose accuracy in its predictions. 
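As an aside, the sketch below illustrates the kind of comparison that SageMaker Model Monitor automates: it summarises the baseline with per-column means and standard deviations and flags any column of a new batch whose mean drifts far from the baseline. It is a simplified, pure-pandas illustration only (the 3-standard-deviation threshold is an arbitrary assumption); the real checks are configured later through Model Monitor.

###Code
# Simplified drift illustration; SageMaker Model Monitor performs a far more thorough
# comparison against the baseline statistics and suggested constraints.
baseline_stats = train_df.describe().loc[['mean', 'std']]

def naive_drift_check(batch_df, baseline_stats, threshold=3.0):
    """Flag columns whose batch mean is more than `threshold` baseline std devs away."""
    drifted = {}
    for column in baseline_stats.columns:
        mean = baseline_stats.loc['mean', column]
        std = baseline_stats.loc['std', column]
        z = abs(batch_df[column].mean() - mean) / std
        if z > threshold:
            drifted[column] = round(z, 2)
    return drifted

# The held-out test split comes from the same distribution, so this should return {}.
print(naive_drift_check(test_df, baseline_stats))
###Output
_____no_output_____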
###Code train_cols = ['total_amount', 'duration_minutes','passenger_count','trip_distance'] train_df.to_csv('train.csv', index=False, header=False) val_df.to_csv('validation.csv', index=False, header=False) test_df.to_csv('test.csv', index=False, header=False) # Save test and baseline with headers train_df.to_csv('baseline.csv', index=False, header=True) ###Output _____no_output_____ ###Markdown Now upload these CSV files to your default SageMaker S3 bucket. ###Code import sagemaker # Get the session and default bucket session = sagemaker.session.Session() bucket = session.default_bucket() # Specify data prefix and version prefix = 'nyc-tlc/v1' s3_train_uri = session.upload_data('train.csv', bucket, prefix + '/data/training') s3_val_uri = session.upload_data('validation.csv', bucket, prefix + '/data/validation') s3_test_uri = session.upload_data('test.csv', bucket, prefix + '/data/test') s3_baseline_uri = session.upload_data('baseline.csv', bucket, prefix + '/data/baseline') ###Output _____no_output_____ ###Markdown You will use the datasets which you have prepared and saved in this section to trigger the pipeline to train and deploy a model in the next section. BuildIf you navigate to the CodePipeline instance created for this workshop, you will notice that the Source stage is initially in a `Failed` state. This happens because the dataset, which is one of the sources that can trigger the pipeline, has not yet been uploaded to the S3 location expected by the pipeline.![Failed code pipeline](../docs/pipeline_failed.png) Trigger BuildIn this section, you will start a model build and deployment pipeline by packaging up the datasets you prepared in the previous section and uploading these to the S3 source location which triggers the CodePipeline instance created for this workshop. First, import some libraries and load some environment variables which you will need. These environment variables have been set through a [lifecycle configuration](https://docs.aws.amazon.com/sagemaker/latest/dg/notebook-lifecycle-config.html)⇗ script attached to this notebook. ###Code !ls -ltr /etc/profile.d import os os.environ["ARTIFACT_BUCKET"] = "exp01-dev-modelexample-artifact" os.environ["PIPELINE_NAME"] = "ModelExample" os.environ["MODEL_NAME"] = "ModelExample" os.environ["WORKFLOW_PIPELINE_ARN"] = "arn:aws:states:eu-west-1:342965497847:stateMachine:ModelExample" os.environ["WORKFLOW_ROLE_ARN"] = "arn:aws:iam::342965497847:role/exp01-dev-modelexample-sfn-execution-role" import boto3 from botocore.exceptions import ClientError import os import time region = boto3.Session().region_name artifact_bucket = os.environ['ARTIFACT_BUCKET'] pipeline_name = os.environ['PIPELINE_NAME'] model_name = os.environ['MODEL_NAME'] workflow_pipeline_arn = os.environ['WORKFLOW_PIPELINE_ARN'] print('region: {}'.format(region)) print('artifact bucket: {}'.format(artifact_bucket)) print('pipeline: {}'.format(pipeline_name)) print('model name: {}'.format(model_name)) print('workflow: {}'.format(workflow_pipeline_arn)) ###Output region: eu-west-1 artifact bucket: exp01-dev-modelexample-rnybworc-artifact pipeline: ModelExample model name: ModelExample workflow: arn:aws:states:eu-west-1:342965497847:stateMachine:ModelExample ###Markdown From the AWS CodePipeline [documentation](https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-simple-s3.html)⇗:> When Amazon S3 is the source provider for your pipeline, you may zip your source file or files into a single .zip and upload the .zip to your source bucket. 
You may also upload a single unzipped file; however, downstream actions that expect a .zip file will fail.To train a model, you need multiple datasets (train, validation, and test) along with a file specifying the hyperparameters. In this example, you will create one JSON file which contains the S3 dataset locations and one JSON file which contains the hyperparameter values. Then you compress both files into a zip package to be used as input for the pipeline run. ###Code from io import BytesIO import zipfile import json input_data = { 'TrainingUri': s3_train_uri, 'ValidationUri': s3_val_uri, 'TestUri': s3_test_uri, 'BaselineUri': s3_baseline_uri } hyperparameters = { 'num_round': 50 } zip_buffer = BytesIO() with zipfile.ZipFile(zip_buffer, 'a') as zf: zf.writestr('inputData.json', json.dumps(input_data)) zf.writestr('hyperparameters.json', json.dumps(hyperparameters)) zip_buffer.seek(0) data_source_key = '{}/data-source.zip'.format(pipeline_name) print({data_source_key},{artifact_bucket}) ###Output {'ModelExample/data-source.zip'} {'exp01-dev-modelexample-rnybworc-artifact'} ###Markdown Now upload the zip package to your artifact S3 bucket - this action will trigger the pipeline to train and deploy a model. ###Code s3 = boto3.client('s3') s3.put_object(Bucket=artifact_bucket, Key=data_source_key, Body=bytearray(zip_buffer.read())) ###Output _____no_output_____ ###Markdown Click the link below to open the AWS console at the Code Pipeline if you don't have it open in another tab. Tip: You may need to wait a minute to see the DataSource stage turn green. The page will refresh automatically.![Source Green](../docs/datasource-after.png) ###Code from IPython.core.display import HTML HTML('<a target="_blank" href="https://{0}.console.aws.amazon.com/codesuite/codepipeline/pipelines/{1}/view?region={0}">Code Pipeline</a>'.format(region, pipeline_name)) ###Output _____no_output_____ ###Markdown Inspect Build LogsOnce the build stage is running, you will see the AWS CodeBuild job turn blue with a status of **In progress**.![Failed code pipeline](../docs/codebuild-inprogress.png) You can click on the **Details** link displayed in the CodePipeline UI or click the link below to jump directly to the CodeBuild logs. Tip: You may need to wait a few seconds for the pipeline to transition into the active (blue) state and for the build to start. ###Code codepipeline = boto3.client('codepipeline') def get_pipeline_stage(pipeline_name, stage_name): response = codepipeline.get_pipeline_state(name=pipeline_name) for stage in response['stageStates']: if stage['stageName'] == stage_name: return stage # Get last execution id build_stage = get_pipeline_stage(pipeline_name, 'Build') if not 'latestExecution' in build_stage: raise(Exception('Please wait. Build not started')) build_url = build_stage['actionStates'][0]['latestExecution']['externalExecutionUrl'] # Out a link to the code build logs HTML('<a target="_blank" href="{0}">Code Build Logs</a>'.format(build_url)) ###Output _____no_output_____ ###Markdown The AWS CodeBuild process is responsible for creating a number of AWS CloudFormation templates which we will explore in more detail in the next section. Two of these templates are used to set up the **Train** step by creating the AWS Step Functions worklow and the custom AWS Lambda functions used within this workflow. Train Model Inspect Training JobWait until the pipeline has started running the Train step (see screenshot) before continuing with the next cells in this notebook. 
![Training in progress](../docs/train-in-progress.png)When the pipeline has started running the train step, you can click on the **Details** link displayed in the CodePipeline UI (see screenshot above) to view the Step Functions workflow which is running the training job. Alternatively, you can click on the Workflow link from the cell output below once it's available. ###Code from stepfunctions.workflow import Workflow while True: try: workflow = Workflow.attach(workflow_pipeline_arn) break except ClientError as e: print(e.response["Error"]["Message"]) time.sleep(10) workflow ###Output _____no_output_____ ###Markdown Or simply run the cell below to display the Step Functions workflow, and re-run it after a few minutes to see the progress. ###Code executions = workflow.list_executions() if not executions: raise(Exception('Please wait. Training not started')) executions[0].render_progress() ###Output _____no_output_____ ###Markdown Review Build ScriptWhile you wait for the training job to complete, let's take a look at the `run.py` code which was used by the AWS CodeBuild process.This script takes all of the input parameters, including the dataset locations and hyperparameters which you saved to JSON files earlier in this notebook, and uses them to generate the templates which the pipeline needs to run the training job. It *does not* create the actual Step Functions instance - it only generates the templates which define the Step Functions workflow, as well as the CloudFormation input templates which CodePipeline uses to instantiate the Step Functions instance.Step-by-step, the script does the following:1. It collects all the input parameters it needs to generate the templates. This includes information about the environment container needed to run the training job, the input and output data locations, IAM roles needed by various components, encryption keys, and more. It then sets up some basic parameters like the AWS region and the function names.1. If the input parameters specify an environment container stored in ECR, it fetches that container. Otherwise, it fetches the URI of the AWS managed environment container needed for the training job.1. It reads the input data JSON file which you generated earlier in this notebook (and which was included in the zip source for the pipeline), thereby fetching the locations of the train, validation, and baseline data files. Then it formats more parameters which will be needed later in the script, including version IDs and output data locations.1. It reads the hyperparameter JSON file which you generated earlier in this notebook.1. It defines the Step Functions workflow, starting with the input schema, followed by each step of the workflow (i.e. Create Experiment, Baseline Job, Training Job), and finally combines those steps into a workflow graph. 1. The workflow graph is saved to file, along with a file containing all of the input parameters saved according to the schema defined in the workflow.1. It saves parameters to file which will be used by CloudFormation to instantiate the Step Functions workflow. ###Code !pygmentize ../model/run.py ###Output _____no_output_____ ###Markdown Customize Workflow (Optional)If you are interested in customising the workflow used in the Build Script, store the `input_data` to be used within the local [workflow.ipynb](workflow.ipynb) notebook. The workflow notebook can be used to experiment with the Step Functions workflow and training job definitions for your model. 
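For orientation, the cell below is a heavily simplified sketch of what a training-step definition with the Step Functions Data Science SDK can look like. It is illustrative only: the instance type, job and workflow names, and hyperparameters are assumptions, and the pipeline's real workflow is the one generated by `model/run.py`.

###Code
# Illustrative sketch only -- not the workflow used by the pipeline.
import os
import sagemaker
from sagemaker.inputs import TrainingInput
from stepfunctions.steps import Chain, TrainingStep
from stepfunctions.workflow import Workflow

image_uri = sagemaker.image_uris.retrieve('xgboost', region, version='1.0-1')
xgb_estimator = sagemaker.estimator.Estimator(
    image_uri=image_uri,
    role=sagemaker.get_execution_role(),
    instance_count=1,
    instance_type='ml.m5.xlarge',
    hyperparameters={'objective': 'reg:squarederror', 'num_round': 50},
)

training_step = TrainingStep(
    'Training Job',
    estimator=xgb_estimator,
    data={
        'train': TrainingInput(s3_train_uri, content_type='text/csv'),
        'validation': TrainingInput(s3_val_uri, content_type='text/csv'),
    },
    job_name='sketch-training-job',
)

sketch_workflow = Workflow(
    name='sketch-training-workflow',
    definition=Chain([training_step]),
    role=os.environ['WORKFLOW_ROLE_ARN'],
)
# sketch_workflow.create() would register the state machine; it is deliberately not run here.
###Output
_____no_output_____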
###Code %store input_data ###Output _____no_output_____ ###Markdown Training AnalyticsOnce the training and baseline jobs are complete (meaning they are displayed in a green color in the Step Functions workflow, this takes around 5 minutes), you can inspect the experiment metrics. The code below will display all experiments in a table. Note that the baseline processing job won't have RMSE metrics - it calculates metrics based on the training data, but does not train a machine learning model. You will [explore the baseline](Explore-Baseline) results later in this notebook. ###Code from sagemaker import analytics experiment_name = 'mlops-{}'.format(model_name) model_analytics = analytics.ExperimentAnalytics(experiment_name=experiment_name) analytics_df = model_analytics.dataframe() if (analytics_df.shape[0] == 0): raise(Exception('Please wait. No training or baseline jobs')) pd.set_option('display.max_colwidth', 100) # Increase column width to show full copmontent name cols = ['TrialComponentName', 'DisplayName', 'SageMaker.InstanceType', 'train:rmse - Last', 'validation:rmse - Last'] # return the last rmse for training and validation analytics_df[analytics_df.columns & cols].head(2) ###Output _____no_output_____ ###Markdown Deploy Dev Test Dev DeploymentWhen the pipeline has finished training a model, it automatically moves to the next step, where the model is deployed as a SageMaker Endpoint. This endpoint is part of your dev deployment, therefore, in this section, you will run some tests on the endpoint to decide if you want to deploy this model into production.First, run the cell below to fetch the name of the SageMaker Endpoint. ###Code codepipeline = boto3.client('codepipeline') deploy_dev = get_pipeline_stage(pipeline_name, 'DeployDev') if not 'latestExecution' in deploy_dev: raise(Exception('Please wait. Deploy dev not started')) execution_id = deploy_dev['latestExecution']['pipelineExecutionId'] dev_endpoint_name = 'mlops-{}-dev-{}'.format(model_name, execution_id) print('endpoint name: {}'.format(dev_endpoint_name)) ###Output _____no_output_____ ###Markdown If you moved through the previous section very quickly, you will need to wait until the dev endpoint has been successfully deployed and the pipeline is waiting for approval to deploy to production (see screenshot). It can take up to 10 minutes for SageMaker to create an endpoint.![Deploying dev endpoint in code pipeline](../docs/dev-deploy-ready.png)Alternatively, run the code below to check the status of your endpoint. Wait until the status of the endpoint is 'InService'. ###Code sm = boto3.client('sagemaker') while True: try: response = sm.describe_endpoint(EndpointName=dev_endpoint_name) print("Endpoint status: {}".format(response['EndpointStatus'])) if response['EndpointStatus'] == 'InService': break except ClientError as e: print(e.response["Error"]["Message"]) time.sleep(10) ###Output _____no_output_____ ###Markdown Now that your endpoint is ready, let's write some code to run the test data (which you split off from the dataset and saved to file at the start of this notebook) through the endpoint for inference. The code below supports both v1 and v2 of the SageMaker SDK, but we recommend using v2 of the SDK in all of your future projects. 
###Code import numpy as np from tqdm import tqdm try: # Support SageMaker v2 SDK: https://sagemaker.readthedocs.io/en/stable/v2.html from sagemaker.predictor import Predictor from sagemaker.serializers import CSVSerializer def get_predictor(endpoint_name): xgb_predictor = Predictor(endpoint_name) xgb_predictor.serializer = CSVSerializer() return xgb_predictor except: # Fallback to SageMaker v1.70 SDK from sagemaker.predictor import RealTimePredictor, csv_serializer def get_predictor(endpoint_name): xgb_predictor = RealTimePredictor(endpoint_name) xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer return xgb_predictor def predict(predictor, data, rows=500): split_array = np.array_split(data, round(data.shape[0] / float(rows))) predictions = '' for array in tqdm(split_array): predictions = ','.join([predictions, predictor.predict(array).decode('utf-8')]) return np.fromstring(predictions[1:], sep=',') ###Output _____no_output_____ ###Markdown Now use the `predict` function, which was defined in the code above, to run the test data through the endpoint and generate the predictions. ###Code dev_predictor = get_predictor(dev_endpoint_name) predictions = predict(dev_predictor, test_df[test_df.columns[1:]].values) ###Output _____no_output_____ ###Markdown Next, load the predictions into a data frame, and join it with your test data. Then, calculate absolute error as the difference between the actual taxi fare and the predicted taxi fare. Display the results in a table, sorted by the highest absolute error values. ###Code pred_df = pd.DataFrame({'total_amount_predictions': predictions }) pred_df = test_df.join(pred_df) # Join on all pred_df['error'] = abs(pred_df['total_amount']-pred_df['total_amount_predictions']) pred_df.sort_values('error', ascending=False).head() ###Output _____no_output_____ ###Markdown From this table, we note that some short trip distances have large errors because the low predicted fare does not match the high actual fare. This could be the result of a generous tip which we haven't included in this dataset.You can also analyze the results by plotting the absolute error to visualize outliers. In this graph, we see that most of the outliers are cases where the model predicted a much lower fare than the actual fare. There are only a few outliers where the model predicted a higher fare than the actual fare. ###Code sns.scatterplot(data=pred_df, x='total_amount_predictions', y='total_amount', hue='error') ###Output _____no_output_____ ###Markdown If you want one overall measure of quality for the model, you can calculate the root mean square error (RMSE) for the predicted fares compared to the actual fares. Compare this to the [results calculated on the validation set](validation-results) at the end of the 'Inspect Training Job' section. ###Code from math import sqrt from sklearn.metrics import mean_squared_error def rmse(pred_df): return sqrt(mean_squared_error(pred_df['total_amount'], pred_df['total_amount_predictions'])) print('RMSE: {}'.format(rmse(pred_df))) ###Output _____no_output_____ ###Markdown Deploy Prod Approve Deployment to ProductionIf you are happy with the results of the model, you can go ahead and approve the model to be deployed into production. You can do so by clicking the **Review** button in the CodePipeline UI, leaving a comment to explain why you approve this model, and clicking on **Approve**. 
Alternatively, you can create a Jupyter widget which (when enabled) allows you to comment and approve the model directly from this notebook. Run the cell below to see this in action. ###Code import ipywidgets as widgets def on_click(obj): result = { 'summary': approval_text.value, 'status': obj.description } response = codepipeline.put_approval_result( pipelineName=pipeline_name, stageName='DeployDev', actionName='ApproveDeploy', result=result, token=approval_action['token'] ) button_box.close() print(result) # Create the widget if we are ready for approval deploy_dev = get_pipeline_stage(pipeline_name, 'DeployDev') if not 'latestExecution' in deploy_dev['actionStates'][-1]: raise(Exception('Please wait. Deploy dev not complete')) approval_action = deploy_dev['actionStates'][-1]['latestExecution'] if approval_action['status'] == 'Succeeded': print('Dev approved: {}'.format(approval_action['summary'])) elif 'token' in approval_action: approval_text = widgets.Text(placeholder='Optional approval message') approve_btn = widgets.Button(description="Approved", button_style='success', icon='check') reject_btn = widgets.Button(description="Rejected", button_style='danger', icon='close') approve_btn.on_click(on_click) reject_btn.on_click(on_click) button_box = widgets.HBox([approval_text, approve_btn, reject_btn]) display(button_box) else: raise(Exception('Please wait. No dev approval')) ###Output _____no_output_____ ###Markdown Test Production DeploymentWithin about a minute after approving the model deployment, you should see the pipeline start on the final step: deploying your model into production. In this section, you will check the deployment status and test the production endpoint after it has been deployed.![Deploy production endpoint in code pipeline](../docs/deploy-production.png)This step of the pipeline uses CloudFormation to deploy a number of resources on your behalf. In particular, it creates:1. A production-ready SageMaker Endpoint for your model, with [data capture](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-data-capture.html)⇗ (used by SageMaker Model Monitor) and [autoscaling](https://docs.aws.amazon.com/sagemaker/latest/dg/endpoint-auto-scaling.html)⇗ enabled.1. A [model monitoring schedule](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-scheduling.html)⇗ which outputs the results to CloudWatch metrics, along with a [CloudWatch Alarm](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html)⇗ which will notify you when a violation occurs. 1. A CodeDeploy instance which creates a simple app by deploying API Gateway, three Lambda functions, and an alarm to notify of the success or failure of this deployment. The code for the Lambda functions can be found in `api/app.py`, `api/pre_traffic_hook.py`, and `api/post_traffic_hook.py`. These functions update the endpoint to enable data capture, format and submit incoming traffic to the SageMaker endpoint, and capture the data logs.![Components of production deployment](../docs/cloud-formation.png)Let's check how the deployment is progressing. Use the code below to fetch the execution ID of the depoyment step. Then generate a table which lists the resources created by the CloudFormation stack and their creation status. You can re-run the cell after a few minutes to see how the steps are progressing. 
###Code deploy_prd = get_pipeline_stage(pipeline_name, 'DeployPrd') if not 'latestExecution' in deploy_prd or not 'latestExecution' in deploy_prd['actionStates'][0]: raise(Exception('Please wait. Deploy prd not started')) execution_id = deploy_prd['latestExecution']['pipelineExecutionId'] from datetime import datetime, timedelta from dateutil.tz import tzlocal def get_event_dataframe(events): stack_cols = ['LogicalResourceId', 'ResourceStatus', 'ResourceStatusReason', 'Timestamp'] stack_event_df = pd.DataFrame(events)[stack_cols].fillna('') stack_event_df['TimeAgo'] = (datetime.now(tzlocal())-stack_event_df['Timestamp']) return stack_event_df.drop('Timestamp', axis=1) cfn = boto3.client('cloudformation') stack_name = stack_name='{}-deploy-prd'.format(pipeline_name) print('stack name: {}'.format(stack_name)) # Get latest stack events while True: try: response = cfn.describe_stack_events(StackName=stack_name) break except ClientError as e: print(e.response["Error"]["Message"]) time.sleep(10) get_event_dataframe(response['StackEvents']).head() ###Output _____no_output_____ ###Markdown The resource of most interest to us is the endpoint. This takes on average 10 minutes to deploy. In the meantime, you can take a look at the Python code used for the application. The `app.py` is the main entry point invoking the Amazon SageMaker endpoint. It returns results along with a custom header for the endpoint we invoked. ###Code !pygmentize ../api/app.py ###Output _____no_output_____ ###Markdown The `pre_traffic_hook.py` lambda is invoked prior to deployment and confirms the endpoint has data capture enabled. ###Code !pygmentize ../api/pre_traffic_hook.py ###Output _____no_output_____ ###Markdown The `post_traffic_hook.py` lambda is invoked to perform any final checks, in this case to verify that we have received log data from data capature. ###Code !pygmentize ../api/post_traffic_hook.py ###Output _____no_output_____ ###Markdown Use the code below to fetch the name of the endpoint, then run a loop to wait for the endpoint to be fully deployed. You need the status to be 'InService'. ###Code prd_endpoint_name='mlops-{}-prd-{}'.format(model_name, execution_id) print('prod endpoint: {}'.format(prd_endpoint_name)) sm = boto3.client('sagemaker') while True: try: response = sm.describe_endpoint(EndpointName=prd_endpoint_name) print("Endpoint status: {}".format(response['EndpointStatus'])) # Wait until the endpoint is in service with data capture enabled if response['EndpointStatus'] == 'InService' \ and 'DataCaptureConfig' in response \ and response['DataCaptureConfig']['EnableCapture']: break except ClientError as e: print(e.response["Error"]["Message"]) time.sleep(10) ###Output _____no_output_____ ###Markdown When the endpoint status is 'InService', you can continue. Earlier in this notebook, you created some code to send data to the dev endpoint. Reuse this code now to send a sample of the test data to the production endpoint. Since data capture is enabled on this endpoint, you want to send single records at a time, so the model monitor can map these records to the baseline. You will [inspect the model monitor](Inspect-Model-Monitor) later in this notebook. For now, just check if you can send data to the endpoint and receive predictions in return. 
###Code prd_predictor = get_predictor(prd_endpoint_name) sample_values = test_df[test_df.columns[1:]].sample(100).values predictions = predict(prd_predictor, sample_values, rows=1) predictions ###Output _____no_output_____ ###Markdown Test REST APIAlthough you already tested the SageMaker endpoint in the previous section, it is also a good idea to test the application created with API Gateway. ![Traffic shift between endpoints](../docs/lambda-deploy-create.png)Follow the link below to open the Lambda Deployment where you can see the in-progress and completed deployments. You can also click to expand the **SAM template** to see the packaged CloudFormation template used in the deployment. ###Code HTML('<a target="_blank" href="https://{0}.console.aws.amazon.com/lambda/home?region={0}#/applications/{1}-deploy-prd?tab=deploy">Lambda Deployment</a>'.format(region, model_name)) ###Output _____no_output_____ ###Markdown Run the code below to confirm that the endpoint is in service. It will complete once the REST API is available. ###Code def get_stack_status(stack_name): response = cfn.describe_stacks(StackName=stack_name) if response['Stacks']: stack = response['Stacks'][0] outputs = None if 'Outputs' in stack: outputs = dict([(o['OutputKey'], o['OutputValue']) for o in stack['Outputs']]) return stack['StackStatus'], outputs outputs = None while True: try: status, outputs = get_stack_status(stack_name) response = sm.describe_endpoint(EndpointName=prd_endpoint_name) print("Endpoint status: {}".format(response['EndpointStatus'])) if outputs: break elif status.endswith('FAILED'): raise(Exception('Stack status: {}'.format(status))) except ClientError as e: print(e.response["Error"]["Message"]) time.sleep(10) if outputs: print('deployment application: {}'.format(outputs['DeploymentApplication'])) print('rest api: {}'.format(outputs['RestApi'])) ###Output _____no_output_____ ###Markdown If you are performing an update on your production deployment as a result of running [Trigger Retraining](Trigger-Retraining) you will then be able to expand the Lambda Deployment tab to reveal the resources. Click on the **ApiFunctionAliaslive** link to see the Lambda Deployment in progress. ![Traffic shift between endpoints](../docs/lambda-deploy-update.png)This page will be updated to list the deployment events. It also has a link to the Deployment Application which you can access in the output of the next cell. ###Code HTML('<a target="_blank" href="https://{0}.console.aws.amazon.com/codesuite/codedeploy/applications/{1}?region={0}">CodeDeploy application</a>'.format(region, outputs['DeploymentApplication'])) ###Output _____no_output_____ ###Markdown CodeDeploy will perform a canary deployment and send 10% of the traffic to the new endpoint over a 5-minute period.![Traffic shift between endpoints](../docs/code-deploy.gif) We can invoke the REST API and inspect the headers being returned to see which endpoint we are hitting. You will occasionally see the cell below show a different endpoint that settles to the new version once the stack is complete. 
###Code %%time from urllib import request headers = {"Content-type": "text/csv"} payload = test_df[test_df.columns[1:]].head(1).to_csv(header=False, index=False).encode('utf-8') rest_api = outputs['RestApi'] while True: try: resp = request.urlopen(request.Request(rest_api, data=payload, headers=headers)) print("Response code: %d: endpoint: %s" % (resp.getcode(), resp.getheader('x-sagemaker-endpoint'))) status, outputs = get_stack_status(stack_name) if status.endswith('COMPLETE'): print('Deployment complete\n') break elif status.endswith('FAILED'): raise(Exception('Stack status: {}'.format(status))) except ClientError as e: print(e.response["Error"]["Message"]) time.sleep(10) ###Output _____no_output_____ ###Markdown Monitor Inspect Model MonitorWhen you prepared the datasets for model training at the start of this notebook, you saved a baseline dataset (a copy of the train dataset). Then, when you approved the model for deployment into production, the pipeline set up a SageMaker Endpoint with data capture enabled and a model monitoring schedule. In this section, you will take a closer look at the model monitor results.To start off, fetch the latest production deployment execution ID. ###Code deploy_prd = get_pipeline_stage(pipeline_name, 'DeployPrd') if 'latestExecution' not in deploy_prd: raise(Exception('Please wait. Deploy prod not complete')) execution_id = deploy_prd['latestExecution']['pipelineExecutionId'] ###Output _____no_output_____ ###Markdown Under the hood, SageMaker model monitor runs in SageMaker processing jobs. Use the execution ID to fetch the names of the processing job and the schedule. ###Code processing_job_name='mlops-{}-pbl-{}'.format(model_name, execution_id) schedule_name='mlops-{}-pms'.format(model_name) print('processing job name: {}'.format(processing_job_name)) print('schedule name: {}'.format(schedule_name)) ###Output _____no_output_____ ###Markdown Explore BaselineNow fetch the baseline results from the processing job. This cell will throw an exception if the processing job is not complete - if that happens, just wait several minutes and try again. ###Code import sagemaker from sagemaker.model_monitor import BaseliningJob, MonitoringExecution from sagemaker.s3 import S3Downloader sagemaker_session = sagemaker.Session() baseline_job = BaseliningJob.from_processing_name(sagemaker_session, processing_job_name) status = baseline_job.describe()['ProcessingJobStatus'] if status != 'Completed': raise(Exception('Please wait. Processing job not complete, status: {}'.format(status))) baseline_results_uri = baseline_job.outputs[0].destination ###Output _____no_output_____ ###Markdown SageMaker model monitor generates two types of files. Take a look at the statistics file first. It calculates various statistics for each feature of the dataset, including the mean, standard deviation, minimum value, maximum value, and more. ###Code import pandas as pd import json baseline_statistics = baseline_job.baseline_statistics().body_dict schema_df = pd.json_normalize(baseline_statistics["features"]) schema_df[["name", "numerical_statistics.mean", "numerical_statistics.std_dev", "numerical_statistics.min", "numerical_statistics.max"]].head() ###Output _____no_output_____ ###Markdown Now look at the suggested [constraints files](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-byoc-constraints.html)⇗. As the name implies, these are constraints which SageMaker model monitor recommends.
If the live data which is sent to your production SageMaker Endpoint violates these constraints, this indicates data drift, and model monitor can raise an alert to trigger retraining. Of course, you can set different constraints based on the statistics which you viewed previously. ###Code baseline_constraints = baseline_job.suggested_constraints().body_dict constraints_df = pd.json_normalize(baseline_constraints["features"]) constraints_df.head() ###Output _____no_output_____ ###Markdown View data captureWhen the "Deploy Production" stage of the MLOps pipeline deploys a SageMaker endpoint, it also enables data capture. This means the incoming requests to the endpoint, as well as the results from the ML model, are stored in an S3 location. Model monitor can analyze this data and compare it to the baseline to ensure that no constraints are violated. Use the code below to check how many files have been created by the data capture, and view the latest file in detail. Note: data capture relies on data being sent to the production endpoint. If you don't see any files yet, wait several minutes and try again. ###Code bucket = sagemaker_session.default_bucket() data_capture_logs_uri = 's3://{}/{}/datacapture/{}'.format(bucket, model_name, prd_endpoint_name) capture_files = S3Downloader.list(data_capture_logs_uri) print('Found {} files'.format(len(capture_files))) if capture_files: # Get the first line of the most recent file event = json.loads(S3Downloader.read_file(capture_files[-1]).split('\n')[0]) print('\nLast file:\n{}'.format(json.dumps(event, indent=2))) ###Output _____no_output_____ ###Markdown View monitoring scheduleThere are some useful functions for plotting and rendering distribution statistics or constraint violations provided in a `utils` file in the [SageMaker Examples GitHub](https://github.com/aws/amazon-sagemaker-examples/tree/master/sagemaker_model_monitor/visualization)⇗. Grab a copy of this code to use in this notebook. ###Code !wget -O utils.py --quiet https://raw.githubusercontent.com/awslabs/amazon-sagemaker-examples/master/sagemaker_model_monitor/visualization/utils.py import utils as mu ###Output _____no_output_____ ###Markdown The [minimum scheduled run time](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-scheduling.html)⇗ for model monitor is one hour, which means you will need to wait at least an hour to see any results. Use the code below to check the schedule status and list the next run. If you are completing this notebook as part of a workshop, your host will have activities which you can complete while you wait. ###Code sm = boto3.client('sagemaker') response = sm.describe_monitoring_schedule(MonitoringScheduleName=schedule_name) print('Schedule Status: {}'.format(response['MonitoringScheduleStatus'])) now = datetime.now(tzlocal()) next_hour = (now+timedelta(hours=1)).replace(minute=0) scheduled_diff = (next_hour-now).seconds//60 print('Next schedule in {} minutes'.format(scheduled_diff)) ###Output _____no_output_____ ###Markdown While you wait, you can take a look at the CloudFormation template that serves as the base for the template CodeDeploy builds to deploy the production application. Alternatively, you can jump ahead to [Trigger Retraining](Trigger-Retraining) which will kick off another run of the code pipeline whilst you wait.
###Code !cat ../assets/deploy-model-prd.yml ###Output _____no_output_____ ###Markdown A couple of minutes after the model monitoring schedule has run, you can use the code below to fetch the latest schedule status. A completed schedule run may have found violations. ###Code processing_job_arn = None while processing_job_arn == None: try: response = sm.list_monitoring_executions(MonitoringScheduleName=schedule_name) except ClientError as e: print(e.response["Error"]["Message"]) for mon in response['MonitoringExecutionSummaries']: status = mon['MonitoringExecutionStatus'] now = datetime.now(tzlocal()) created_diff = (now-mon['CreationTime']).seconds//60 print('Schedule status: {}, Created: {} minutes ago'.format(status, created_diff)) if status in ['Completed', 'CompletedWithViolations']: processing_job_arn = mon['ProcessingJobArn'] break if status == 'InProgress': break else: raise(Exception('Please wait. No Schedules executing')) time.sleep(10) ###Output _____no_output_____ ###Markdown View monitoring resultsOnce the model monitoring schedule has had a chance to run at least once, you can take a look at the results. First, load the monitoring execution results from the latest scheduled run. ###Code if processing_job_arn: execution = MonitoringExecution.from_processing_arn(sagemaker_session=sagemaker.Session(), processing_job_arn=processing_job_arn) exec_inputs = {inp['InputName']: inp for inp in execution.describe()['ProcessingInputs']} exec_results_uri = execution.output.destination print('Monitoring Execution results: {}'.format(exec_results_uri)) ###Output _____no_output_____ ###Markdown Take a look at the files which have been saved in the S3 output location. If violations were found, you should see a constraint violations file in addition to the statistics and constraints file which you viewed before. ###Code !aws s3 ls $exec_results_uri/ ###Output _____no_output_____ ###Markdown Now, fetch the monitoring statistics and violations. Then use the utils code to visualize the results in a table. It will highlight any baseline drift found by the model monitor. Drift can happen for categorical features (for inferred string styles) or for numerical features (e.g. total fare amount). ###Code # Get the baseline and monitoring statistics & violations baseline_statistics = baseline_job.baseline_statistics().body_dict execution_statistics = execution.statistics().body_dict violations = execution.constraint_violations().body_dict['violations'] mu.show_violation_df(baseline_statistics=baseline_statistics, latest_statistics=execution_statistics, violations=violations) ###Output _____no_output_____ ###Markdown Trigger RetrainingThe CodePipeline instance is configured with [CloudWatch Events](https://docs.aws.amazon.com/codepipeline/latest/userguide/create-cloudtrail-S3-source.html)⇗ to start the pipeline for retraining when the drift detection triggers specific metric alarms.You can simulate drift by putting a metric value above the threshold of `0.2` directly into CloudWatch. This will trigger the alarm, and start the code pipeline. Tip: This alarm is configured only for the latest production endpoint, so re-training will only occur if you are putting metrics against the latest endpoint.![Metric graph in CloudWatch](../docs/cloudwatch-alarm.png)Run the code below to trigger the metric alarm. The cell output will be a link to CloudWatch, where you can see the alarm (similar to the screenshot above), and a link to CodePipeline which you will see run again. 
Note that it can take a couple of minutes for everything to trigger. ###Code from datetime import datetime import random cloudwatch = boto3.client('cloudwatch') # Define the metric name and threshold metric_name = 'feature_baseline_drift_total_amount' metric_threshold = 0.2 # Put a new metric to trigger an alarm def put_drift_metric(value): print('Putting metric: {}'.format(value)) response = cloudwatch.put_metric_data( Namespace='aws/sagemaker/Endpoints/data-metrics', MetricData=[ { 'MetricName': metric_name, 'Dimensions': [ { 'Name': 'MonitoringSchedule', 'Value': schedule_name }, { 'Name': 'Endpoint', 'Value': prd_endpoint_name }, ], 'Timestamp': datetime.now(), 'Value': value, 'Unit': 'None' }, ] ) def get_drift_stats(): response = cloudwatch.get_metric_statistics( Namespace='aws/sagemaker/Endpoints/data-metrics', MetricName=metric_name, Dimensions=[ { 'Name': 'MonitoringSchedule', 'Value': schedule_name }, { 'Name': 'Endpoint', 'Value': prd_endpoint_name }, ], StartTime=datetime.now() - timedelta(minutes=2), EndTime=datetime.now(), Period=1, Statistics=['Average'], Unit='None' ) if 'Datapoints' in response and len(response['Datapoints']) > 0: return response['Datapoints'][0]['Average'] return 0 print('Simulate drift on endpoint: {}'.format(prd_endpoint_name)) while True: put_drift_metric(round(random.uniform(metric_threshold, 1.0), 4)) drift_stats = get_drift_stats() print('Average drift amount: {}'.format(drift_stats)) if drift_stats > metric_threshold: break time.sleep(1) ###Output _____no_output_____ ###Markdown Click through to the Alarm and CodePipeline Execution history with the links below. ###Code # Output a html link to the cloudwatch dashboard metric_alarm_name = 'mlops-{}-metric-gt-threshold'.format(model_name) HTML('''<a target="_blank" href="https://{0}.console.aws.amazon.com/cloudwatch/home?region={0}#alarmsV2:alarm/{1}">CloudWatch Alarm</a> triggers <a target="_blank" href="https://{0}.console.aws.amazon.com/codesuite/codepipeline/pipelines/{2}/executions?region={0}">Code Pipeline Execution</a>'''.format(region, metric_alarm_name, pipeline_name)) ###Output _____no_output_____ ###Markdown Once the pipeline is running again, you can jump back up to [Inspect Training Job](Inspect-Training-Job) Create Synthetic Monitoring[Amazon CloudWatch Synthetics](https://aws.amazon.com/blogs/aws/new-use-cloudwatch-synthetics-to-monitor-sites-api-endpoints-web-workflows-and-more/) allows you to monitor sites, REST APIs, and other services deployed on AWS. You can set up a canary to test that your REST API is returning an expected value at a regular interval. This is a great way to validate that the blue/green deployment is not causing any downtime for your end-users.Use the code below to set up a canary to continuously test the production deployment. This canary simply pings the REST API to test if it is live, using code from `notebook/canary.js`.
###Code from urllib.parse import urlparse from string import Template from io import BytesIO import zipfile # Format the canary_js with rest_api and payload rest_url = urlparse(rest_api) with open('canary.js') as f: canary_js = Template(f.read()).substitute(hostname=rest_url.netloc, path=rest_url.path, data=payload.decode('utf-8').strip()) # Write the zip file zip_buffer = BytesIO() with zipfile.ZipFile(zip_buffer, 'w') as zf: zip_path = 'nodejs/node_modules/apiCanaryBlueprint.js' # Set a valid path zip_info = zipfile.ZipInfo(zip_path) zip_info.external_attr = 0o0755 << 16 # Ensure the file is readable zf.writestr(zip_info, canary_js) zip_buffer.seek(0) # Create the canary synth = boto3.client('synthetics') role = sagemaker.get_execution_role() s3_canary_uri = 's3://{}/{}'.format(artifact_bucket, model_name) canary_name = 'mlops-{}'.format(model_name) try: response = synth.create_canary( Name=canary_name, Code={ 'ZipFile': bytearray(zip_buffer.read()), 'Handler': 'apiCanaryBlueprint.handler' }, ArtifactS3Location=s3_canary_uri, ExecutionRoleArn=role, Schedule={ 'Expression': 'rate(10 minutes)', 'DurationInSeconds': 0 }, RunConfig={ 'TimeoutInSeconds': 60, 'MemoryInMB': 960 }, SuccessRetentionPeriodInDays=31, FailureRetentionPeriodInDays=31, RuntimeVersion='syn-nodejs-2.0', ) print('Creating canary: {}'.format(canary_name)) except ClientError as e: if e.response["Error"]["Code"] == "AccessDeniedException": print('Canary not supported.') # Not supported in event engine else: raise(e) ###Output _____no_output_____ ###Markdown Now create a CloudWatch alarm which will trigger if the success rate of the canary drops below 90%. ###Code cloudwatch = boto3.client('cloudwatch') canary_alarm_name = '{}-synth-lt-threshold'.format(canary_name) response = cloudwatch.put_metric_alarm( AlarmName=canary_alarm_name, ComparisonOperator='LessThanThreshold', EvaluationPeriods=1, DatapointsToAlarm=1, Period=600, # 10 minute interval Statistic='Average', Threshold=90.0, ActionsEnabled=False, AlarmDescription='SuccessPercent LessThanThreshold 90%', Namespace='CloudWatchSynthetics', MetricName='SuccessPercent', Dimensions=[ { 'Name': 'CanaryName', 'Value': canary_name }, ], Unit='Seconds' ) print('Creating alarm: {}'.format(canary_alarm_name)) ###Output _____no_output_____ ###Markdown Run the code below to check if the canary is running successfully. The cell will output a link to your CloudWatch Canaries UI, where you can watch the results over time (see screenshot).
It can take a couple of minutes for the canary to deploy.![Canary graph in CloudWatch](../docs/canary-green-1hr.png) ###Code while True: try: response = synth.get_canary(Name=canary_name) status = response['Canary']['Status']['State'] print('Canary status: {}'.format(status)) if status == 'ERROR': raise(Exception(response['Canary']['Status']['StateReason'])) elif status == 'READY': synth.start_canary(Name=canary_name) elif status == 'RUNNING': break except ClientError as e: if e.response["Error"]["Code"] == "ResourceNotFoundException": print('No canary found.') break elif e.response["Error"]["Code"] == "AccessDeniedException": print('Canary not supported.') # Not supported in event engine break print(e.response["Error"]["Message"]) time.sleep(10) # Output a html link to the cloudwatch console HTML('<a target="_blank" href="https://{0}.console.aws.amazon.com/cloudwatch/home?region={0}#synthetics:canary/detail/{1}">CloudWatch Canary</a>'.format(region, canary_name)) ###Output _____no_output_____ ###Markdown Create a CloudWatch dashboardFinally, use the code below to create a CloudWatch dashboard to visualize the key performance metrics and alarms which you have created during this demo. The cell will output a link to the dashboard. This dashboard shows 9 charts in three rows, where the first row displays Lambda metrics, the second row displays SageMaker metrics, and the third row (shown in the screenshot below) displays the alarms set up for the pipeline.![Graphs in CloudWatch dashboard](../docs/cloudwatch-dashboard.png) ###Code sts = boto3.client('sts') account_id = sts.get_caller_identity().get('Account') dashboard_name = 'mlops-{}'.format(model_name) with open('dashboard.json') as f: dashboard_body = Template(f.read()).substitute(region=region, account_id=account_id, model_name=model_name) response = cloudwatch.put_dashboard( DashboardName=dashboard_name, DashboardBody=dashboard_body ) # Output a html link to the cloudwatch dashboard HTML('<a target="_blank" href="https://{0}.console.aws.amazon.com/cloudwatch/home?region={0}#dashboards:name={1}">CloudWatch Dashboard</a>'.format(region, canary_name)) ###Output _____no_output_____ ###Markdown Congratulations! You have made it to the end of this notebook, and have automated a safe MLOps pipeline using a wide range of AWS services. You can use the other notebook in this repository [workflow.ipynb](workflow.ipynb) to implement your own ML model and deploy it as part of this pipeline. Or, if you are finished with the content, follow the instructions in the next section to clean up the resources you have deployed. CleanupExecute the following cell to delete the stacks created in the pipeline. For a model name of **nyctaxi** these would be:1. *nyctaxi*-deploy-prd2. *nyctaxi*-deploy-dev3. *nyctaxi*-workflow4. sagemaker-custom-resource ###Code cfn = boto3.client('cloudformation') # Delete the prod and then dev stack for stack_name in [f'{pipeline_name}-deploy-prd', f'{pipeline_name}-deploy-dev', f'{pipeline_name}-workflow', 'sagemaker-custom-resource']: print('Deleting stack: {}'.format(stack_name)) cfn.delete_stack(StackName=stack_name) cfn.get_waiter('stack_delete_complete').wait(StackName=stack_name) ###Output _____no_output_____ ###Markdown The following code will stop and delete the canary you created. 
###Code while True: try: response = synth.get_canary(Name=canary_name) status = response['Canary']['Status']['State'] print('Canary status: {}'.format(status)) if status == 'ERROR': raise(Exception(response['Canary']['Status']['StateReason'])) elif status == 'STOPPED': synth.delete_canary(Name=canary_name) elif status == 'RUNNING': synth.stop_canary(Name=canary_name) except ClientError as e: if e.response["Error"]["Code"] == "ResourceNotFoundException": print('Canary successfully deleted.') break elif e.response["Error"]["Code"] == "AccessDeniedException": print('Canary not created.') # Not supported in event engine break print(e.response["Error"]["Message"]) time.sleep(10) ###Output _____no_output_____ ###Markdown The following code will delete the alarm and the dashboard. ###Code cloudwatch.delete_alarms(AlarmNames=[canary_alarm_name]) print('Alarm deleted') cloudwatch.delete_dashboards(DashboardNames=[dashboard_name]) print('Dashboard deleted')
docs/notebooks/StackInspector.ipynb
###Markdown Inspecting Call StacksIn this book, for many purposes, we need to lookup a function's location, source code, or simply definition. The class `StackInspector` provides a number of convenience methods for this purpose. **Prerequisites*** This is an internal helper class.* Understanding how frames and local variables are represented in Python helps. SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from debuggingbook.StackInspector import ```and then make use of the following features.`StackInspector` is typically used as superclass, providing its functionality to subclasses. Here is an example of how to use `caller_function()`. The `test()` function invokes an internal method `caller()` of `StackInspectorDemo`, which in turn invokes `callee()`:| Function | Class | || --- | --- | --- || `callee()` | `StackInspectorDemo` | || `caller()` | `StackInspectorDemo` | invokes $\uparrow$ || `test()` | (main) | invokes $\uparrow$ || -/- | (main) | invokes $\uparrow$ |Using `caller_function()`, `callee()` determines the first caller outside of a `StackInspector` class and prints it out – i.e., ``.```python>>> class StackInspectorDemo(StackInspector):>>> def callee(self) -> None:>>> func = self.caller_function()>>> assert func.__name__ == 'test'>>> print(func)>>> >>> def caller(self) -> None:>>> self.callee()>>> def test() -> None:>>> demo = StackInspectorDemo()>>> demo.caller()>>> test()```Here are all methods defined in this chapter:![](PICS/StackInspector-synopsis-1.svg) Inspecting Call Stacks`StackInspector` is a class that provides a number of utility functions to inspect a [call stack](https://en.wikipedia.org/wiki/Call_stack), notably to identify caller functions. When tracing or instrumenting functions, a common issue is to identify the currently active functions. A typical situation is depicted below, where `my_inspector()` currently traces a function called `function_under_test()`.| Function | Class | || --- | --- | --- || ... | `StackInspector` | || `caller_frame()` | `StackInspector` | invokes $\uparrow$ || `caller_function()` | `StackInspector` | invokes $\uparrow$ || `my_inspector()` | some inspector; a subclass of `StackInspector` | invokes $\uparrow$ || `function_under_test()` | (any) | is traced by $\uparrow$ || -/- | (any) | invokes $\uparrow$ |To determine the calling function, `my_inspector()` could check the current frame and retrieve the frame of the caller. However, this caller could be some tracing function again invoking `my_inspector()`. Therefore, `StackInspector` provides a method `caller_function()` that returns the first caller outside of a `StackInspector` class. This way, a subclass of `StackInspector` can define an arbitrary set of functions (and call stack); `caller_function()` will always return a function outside of the `StackInspector` subclass. ###Code import bookutils import inspect import warnings from types import FunctionType, FrameType, TracebackType # ignore from typing import cast, Dict, Any, Tuple, Callable, Optional, Type ###Output _____no_output_____ ###Markdown The method `caller_frame()` walks the current call stack from the current frame towards callers (using the `f_back` attribute of the current frame) and returns the first frame that is _not_ a method or function from the current `StackInspector` class or its subclass. To determine this, the method `our_frame()` determines whether the given execution frame refers to one of the methods of `StackInspector` or one of its subclasses. 
###Code class StackInspector: """Provide functions to inspect the stack""" def caller_frame(self) -> FrameType: """Return the frame of the caller.""" # Walk up the call tree until we leave the current class frame = cast(FrameType, inspect.currentframe()) while self.our_frame(frame): frame = cast(FrameType, frame.f_back) return frame def our_frame(self, frame: FrameType) -> bool: """Return true if `frame` is in the current (inspecting) class.""" return isinstance(frame.f_locals.get('self'), self.__class__) ###Output _____no_output_____ ###Markdown When we access program state or execute functions, we do so in the caller's environment, not ours. The `caller_globals()` method acts as replacement for `globals()`, using `caller_frame()`. ###Code class StackInspector(StackInspector): def caller_globals(self) -> Dict[str, Any]: """Return the globals() environment of the caller.""" return self.caller_frame().f_globals def caller_locals(self) -> Dict[str, Any]: """Return the locals() environment of the caller.""" return self.caller_frame().f_locals ###Output _____no_output_____ ###Markdown The method `caller_location()` returns the caller's function and its location. It does a fair bit of magic to retrieve nested functions, by looking through global and local variables until a match is found. This may be simplified in the future. ###Code Location = Tuple[Callable, int] class StackInspector(StackInspector): def caller_location(self) -> Location: """Return the location (func, lineno) of the caller.""" return self.caller_function(), self.caller_frame().f_lineno ###Output _____no_output_____ ###Markdown The function `search_frame()` allows to search for an item named `name`, walking up the call stack. This is handy when trying to find local functions during tracing, for whom typically only the name is provided. ###Code class StackInspector(StackInspector): def search_frame(self, name: str, frame: Optional[FrameType] = None) -> \ Tuple[Optional[FrameType], Optional[Callable]]: """ Return a pair (`frame`, `item`) in which the function `name` is defined as `item`. """ if frame is None: frame = self.caller_frame() while frame: item = None if name in frame.f_globals: item = frame.f_globals[name] if name in frame.f_locals: item = frame.f_locals[name] if item and callable(item): return frame, item frame = cast(FrameType, frame.f_back) return None, None def search_func(self, name: str, frame: Optional[FrameType] = None) -> \ Optional[Callable]: """Search in callers for a definition of the function `name`""" frame, func = self.search_frame(name, frame) return func ###Output _____no_output_____ ###Markdown If we cannot find a function by name, we can create one, using `create_function()`. 
###Code class StackInspector(StackInspector): # Avoid generating functions more than once _generated_function_cache: Dict[Tuple[str, int], Callable] = {} def create_function(self, frame: FrameType) -> Callable: """Create function for given frame""" name = frame.f_code.co_name cache_key = (name, frame.f_lineno) if cache_key in self._generated_function_cache: return self._generated_function_cache[cache_key] try: # Create new function from given code generated_function = cast(Callable, FunctionType(frame.f_code, globals=frame.f_globals, name=name)) except TypeError: # Unsuitable code for creating a function # Last resort: Return some function generated_function = self.unknown except Exception as exc: # Any other exception warnings.warn(f"Couldn't create function for {name} " f" ({type(exc).__name__}: {exc})") generated_function = self.unknown self._generated_function_cache[cache_key] = generated_function return generated_function ###Output _____no_output_____ ###Markdown The method `caller_function()` puts all of these together, simply looking up and returning the currently calling function – and creating one if it cannot be found. ###Code class StackInspector(StackInspector): def caller_function(self) -> Callable: """Return the calling function""" frame = self.caller_frame() name = frame.f_code.co_name func = self.search_func(name) if func: return func if not name.startswith('<'): warnings.warn(f"Couldn't find {name} in caller") return self.create_function(frame) def unknown(self) -> None: # Placeholder for unknown functions pass ###Output _____no_output_____ ###Markdown The method `is_internal_error()` allows us to differentiate whether some exception was raised by `StackInspector` (or a subclass) – or whether it was raised by the inspected code. ###Code import traceback class StackInspector(StackInspector): def is_internal_error(self, exc_tp: Type, exc_value: BaseException, exc_traceback: TracebackType) -> bool: """Return True if exception was raised from `StackInspector` or a subclass.""" if not exc_tp: return False for frame, lineno in traceback.walk_tb(exc_traceback): if self.our_frame(frame): return True return False ###Output _____no_output_____ ###Markdown Synopsis`StackInspector` is typically used as superclass, providing its functionality to subclasses. Here is an example of how to use `caller_function()`. The `test()` function invokes an internal method `caller()` of `StackInspectorDemo`, which in turn invokes `callee()`:| Function | Class | || --- | --- | --- || `callee()` | `StackInspectorDemo` | || `caller()` | `StackInspectorDemo` | invokes $\uparrow$ || `test()` | (main) | invokes $\uparrow$ || -/- | (main) | invokes $\uparrow$ |Using `caller_function()`, `callee()` determines the first caller outside of a `StackInspector` class and prints it out – i.e., ``. 
###Code class StackInspectorDemo(StackInspector): def callee(self) -> None: func = self.caller_function() assert func.__name__ == 'test' print(func) def caller(self) -> None: self.callee() def test() -> None: demo = StackInspectorDemo() demo.caller() test() ###Output <function test at 0x7f9e7adaa6a8> ###Markdown Here are all methods defined in this chapter: ###Code # ignore from ClassDiagram import display_class_hierarchy, class_tree # ignore display_class_hierarchy([StackInspector], abstract_classes=[ StackInspector, ], public_methods=[ StackInspector.caller_frame, StackInspector.caller_function, StackInspector.caller_globals, StackInspector.caller_locals, StackInspector.caller_location, StackInspector.search_frame, StackInspector.search_func, StackInspector.is_internal_error, StackInspector.our_frame, ], project='debuggingbook') ###Output _____no_output_____
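Here is one more small sketch of how a subclass might combine `caller_location()` and `caller_locals()` to report where it was called from and which local variables were live at that point: ###Code
# A small usage sketch: a subclass that reports the caller's location and local variables.
class CallReporter(StackInspector):
    def report(self) -> str:
        func, lineno = self.caller_location()
        caller_vars = self.caller_locals()
        return f"called from {func.__name__}() at line {lineno} with locals {list(caller_vars)}"

def some_function() -> str:
    x = 42  # a local variable visible to the reporter
    return CallReporter().report()

some_function()
###Output _____no_output_____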
This way, a subclass of `StackInspector` can define an arbitrary set of functions (and call stack); `caller_function()` will always return a function outside of the `StackInspector` subclass. ###Code import bookutils import inspect import warnings from types import FunctionType, FrameType, TracebackType # ignore from typing import cast, Dict, Any, Tuple, Callable, Optional, Type ###Output _____no_output_____ ###Markdown The method `caller_frame()` walks the current call stack from the current frame towards callers (using the `f_back` attribute of the current frame) and returns the first frame that is _not_ a method or function from the current `StackInspector` class or its subclass. To determine this, the method `our_frame()` determines whether the given execution frame refers to one of the methods of `StackInspector` or one of its subclasses. ###Code class StackInspector: """Provide functions to inspect the stack""" def caller_frame(self) -> FrameType: """Return the frame of the caller.""" # Walk up the call tree until we leave the current class frame = cast(FrameType, inspect.currentframe()) while self.our_frame(frame): frame = cast(FrameType, frame.f_back) return frame def our_frame(self, frame: FrameType) -> bool: """Return true if `frame` is in the current (inspecting) class.""" return isinstance(frame.f_locals.get('self'), self.__class__) ###Output _____no_output_____ ###Markdown When we access program state or execute functions, we do so in the caller's environment, not ours. The `caller_globals()` method acts as replacement for `globals()`, using `caller_frame()`. ###Code class StackInspector(StackInspector): def caller_globals(self) -> Dict[str, Any]: """Return the globals() environment of the caller.""" return self.caller_frame().f_globals def caller_locals(self) -> Dict[str, Any]: """Return the locals() environment of the caller.""" return self.caller_frame().f_locals ###Output _____no_output_____ ###Markdown The method `caller_location()` returns the caller's function and its location. It does a fair bit of magic to retrieve nested functions, by looking through global and local variables until a match is found. This may be simplified in the future. ###Code Location = Tuple[Callable, int] class StackInspector(StackInspector): def caller_location(self) -> Location: """Return the location (func, lineno) of the caller.""" return self.caller_function(), self.caller_frame().f_lineno ###Output _____no_output_____ ###Markdown The function `search_frame()` allows to search for an item named `name`, walking up the call stack. This is handy when trying to find local functions during tracing, for whom typically only the name is provided. ###Code class StackInspector(StackInspector): def search_frame(self, name: str, frame: Optional[FrameType] = None) -> \ Tuple[Optional[FrameType], Optional[Callable]]: """ Return a pair (`frame`, `item`) in which the function `name` is defined as `item`. 
""" if frame is None: frame = self.caller_frame() while frame: item = None if name in frame.f_globals: item = frame.f_globals[name] if name in frame.f_locals: item = frame.f_locals[name] if item and callable(item): return frame, item frame = cast(FrameType, frame.f_back) return None, None def search_func(self, name: str, frame: Optional[FrameType] = None) -> \ Optional[Callable]: """Search in callers for a definition of the function `name`""" frame, func = self.search_frame(name, frame) return func ###Output _____no_output_____ ###Markdown If we cannot find a function by name, we can create one, using `create_function()`. ###Code class StackInspector(StackInspector): # Avoid generating functions more than once _generated_function_cache: Dict[Tuple[str, int], Callable] = {} def create_function(self, frame: FrameType) -> Callable: """Create function for given frame""" name = frame.f_code.co_name cache_key = (name, frame.f_lineno) if cache_key in self._generated_function_cache: return self._generated_function_cache[cache_key] try: # Create new function from given code generated_function = cast(Callable, FunctionType(frame.f_code, globals=frame.f_globals, name=name)) except TypeError: # Unsuitable code for creating a function # Last resort: Return some function generated_function = self.unknown except Exception as exc: # Any other exception warnings.warn(f"Couldn't create function for {name} " f" ({type(exc).__name__}: {exc})") generated_function = self.unknown self._generated_function_cache[cache_key] = generated_function return generated_function ###Output _____no_output_____ ###Markdown The method `caller_function()` puts all of these together, simply looking up and returning the currently calling function – and creating one if it cannot be found. ###Code class StackInspector(StackInspector): def caller_function(self) -> Callable: """Return the calling function""" frame = self.caller_frame() name = frame.f_code.co_name func = self.search_func(name) if func: return func if not name.startswith('<'): warnings.warn(f"Couldn't find {name} in caller") return self.create_function(frame) def unknown(self) -> None: # Placeholder for unknown functions pass ###Output _____no_output_____ ###Markdown The method `is_internal_error()` allows us to differentiate whether some exception was raised by `StackInspector` (or a subclass) – or whether it was raised by the inspected code. ###Code import traceback class StackInspector(StackInspector): def is_internal_error(self, exc_tp: Type, exc_value: BaseException, exc_traceback: TracebackType) -> bool: """Return True if exception was raised from `StackInspector` or a subclass.""" if not exc_tp: return False for frame, lineno in traceback.walk_tb(exc_traceback): if self.our_frame(frame): return True return False ###Output _____no_output_____ ###Markdown Synopsis`StackInspector` is typically used as superclass, providing its functionality to subclasses. Here is an example of how to use `caller_function()`. The `test()` function invokes an internal method `caller()` of `StackInspectorDemo`, which in turn invokes `callee()`:| Function | Class | || --- | --- | --- || `callee()` | `StackInspectorDemo` | || `caller()` | `StackInspectorDemo` | invokes $\uparrow$ || `test()` | (main) | invokes $\uparrow$ || -/- | (main) | invokes $\uparrow$ |Using `caller_function()`, `callee()` determines the first caller outside of a `StackInspector` class and prints it out – i.e., ``. 
###Code class StackInspectorDemo(StackInspector): def callee(self) -> None: func = self.caller_function() assert func.__name__ == 'test' print(func) def caller(self) -> None: self.callee() def test() -> None: demo = StackInspectorDemo() demo.caller() test() ###Output <function test at 0x7fe5cfcc07b8> ###Markdown Here are all methods defined in this chapter: ###Code # ignore from ClassDiagram import display_class_hierarchy, class_tree # ignore display_class_hierarchy([StackInspector], abstract_classes=[ StackInspector, ], public_methods=[ StackInspector.caller_frame, StackInspector.caller_function, StackInspector.caller_globals, StackInspector.caller_locals, StackInspector.caller_location, StackInspector.search_frame, StackInspector.search_func, StackInspector.is_internal_error, StackInspector.our_frame, ], project='debuggingbook') ###Output _____no_output_____ ###Markdown Inspecting Call StacksIn this book, for many purposes, we need to lookup a function's location, source code, or simply definition. The class `StackInspector` provides a number of convenience methods for this purpose. **Prerequisites*** This is an internal helper class.* Understanding how frames and local variables are represented in Python helps. SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from debuggingbook.StackInspector import ```and then make use of the following features.`StackInspector` is typically used as superclass, providing its functionality to subclasses. Here is an example of how to use `caller_function()`. The `test()` function invokes an internal method `caller()` of `StackInspectorDemo`, which in turn invokes `callee()`:| Function | Class | || --- | --- | --- || `callee()` | `StackInspectorDemo` | || `caller()` | `StackInspectorDemo` | invokes $\uparrow$ || `test()` | (main) | invokes $\uparrow$ || -/- | (main) | invokes $\uparrow$ |Using `caller_function()`, `callee()` determines the first caller outside of a `StackInspector` class and prints it out – i.e., ``.```python>>> class StackInspectorDemo(StackInspector):>>> def callee(self) -> None:>>> func = self.caller_function()>>> assert func.__name__ == 'test'>>> print(func)>>> >>> def caller(self) -> None:>>> self.callee()>>> def test() -> None:>>> demo = StackInspectorDemo()>>> demo.caller()>>> test()```Here are all methods defined in this chapter:![](PICS/StackInspector-synopsis-1.svg) Inspecting Call Stacks`StackInspector` is a class that provides a number of utility functions to inspect a [call stack](https://en.wikipedia.org/wiki/Call_stack), notably to identify caller functions. When tracing or instrumenting functions, a common issue is to identify the currently active functions. A typical situation is depicted below, where `my_inspector()` currently traces a function called `function_under_test()`.| Function | Class | || --- | --- | --- || ... | `StackInspector` | || `caller_frame()` | `StackInspector` | invokes $\uparrow$ || `caller_function()` | `StackInspector` | invokes $\uparrow$ || `my_inspector()` | some inspector; a subclass of `StackInspector` | invokes $\uparrow$ || `function_under_test()` | (any) | is traced by $\uparrow$ || -/- | (any) | invokes $\uparrow$ |To determine the calling function, `my_inspector()` could check the current frame and retrieve the frame of the caller. However, this caller could be some tracing function again invoking `my_inspector()`. Therefore, `StackInspector` provides a method `caller_function()` that returns the first caller outside of a `StackInspector` class. 
This way, a subclass of `StackInspector` can define an arbitrary set of functions (and call stack); `caller_function()` will always return a function outside of the `StackInspector` subclass. ###Code import bookutils import inspect import warnings from types import FunctionType, FrameType, TracebackType # ignore from typing import cast, Dict, Any, Tuple, Callable, Optional, Type ###Output _____no_output_____ ###Markdown The method `caller_frame()` walks the current call stack from the current frame towards callers (using the `f_back` attribute of the current frame) and returns the first frame that is _not_ a method or function from the current `StackInspector` class or its subclass. To determine this, the method `our_frame()` determines whether the given execution frame refers to one of the methods of `StackInspector` or one of its subclasses. ###Code class StackInspector: """Provide functions to inspect the stack""" def caller_frame(self) -> FrameType: """Return the frame of the caller.""" # Walk up the call tree until we leave the current class frame = cast(FrameType, inspect.currentframe()) while self.our_frame(frame): frame = cast(FrameType, frame.f_back) return frame def our_frame(self, frame: FrameType) -> bool: """Return true if `frame` is in the current (inspecting) class.""" return isinstance(frame.f_locals.get('self'), self.__class__) ###Output _____no_output_____ ###Markdown When we access program state or execute functions, we do so in the caller's environment, not ours. The `caller_globals()` method acts as replacement for `globals()`, using `caller_frame()`. ###Code class StackInspector(StackInspector): def caller_globals(self) -> Dict[str, Any]: """Return the globals() environment of the caller.""" return self.caller_frame().f_globals def caller_locals(self) -> Dict[str, Any]: """Return the locals() environment of the caller.""" return self.caller_frame().f_locals ###Output _____no_output_____ ###Markdown The method `caller_location()` returns the caller's function and its location. It does a fair bit of magic to retrieve nested functions, by looking through global and local variables until a match is found. This may be simplified in the future. ###Code Location = Tuple[Callable, int] class StackInspector(StackInspector): def caller_location(self) -> Location: """Return the location (func, lineno) of the caller.""" return self.caller_function(), self.caller_frame().f_lineno ###Output _____no_output_____ ###Markdown The function `search_frame()` allows to search for an item named `name`, walking up the call stack. This is handy when trying to find local functions during tracing, for whom typically only the name is provided. ###Code class StackInspector(StackInspector): def search_frame(self, name: str, frame: Optional[FrameType] = None) -> \ Tuple[Optional[FrameType], Optional[Callable]]: """ Return a pair (`frame`, `item`) in which the function `name` is defined as `item`. 
""" if frame is None: frame = self.caller_frame() while frame: item = None if name in frame.f_globals: item = frame.f_globals[name] if name in frame.f_locals: item = frame.f_locals[name] if item and callable(item): return frame, item frame = cast(FrameType, frame.f_back) return None, None def search_func(self, name: str, frame: Optional[FrameType] = None) -> \ Optional[Callable]: """Search in callers for a definition of the function `name`""" frame, func = self.search_frame(name, frame) return func ###Output _____no_output_____ ###Markdown If we cannot find a function by name, we can create one, using `create_function()`. ###Code class StackInspector(StackInspector): # Avoid generating functions more than once _generated_function_cache: Dict[Tuple[str, int], Callable] = {} def create_function(self, frame: FrameType) -> Callable: """Create function for given frame""" name = frame.f_code.co_name cache_key = (name, frame.f_lineno) if cache_key in self._generated_function_cache: return self._generated_function_cache[cache_key] try: # Create new function from given code generated_function = cast(Callable, FunctionType(frame.f_code, globals=frame.f_globals, name=name)) except TypeError: # Unsuitable code for creating a function # Last resort: Return some function generated_function = self.unknown except Exception as exc: # Any other exception warnings.warn(f"Couldn't create function for {name} " f" ({type(exc).__name__}: {exc})") generated_function = self.unknown self._generated_function_cache[cache_key] = generated_function return generated_function ###Output _____no_output_____ ###Markdown The method `caller_function()` puts all of these together, simply looking up and returning the currently calling function – and creating one if it cannot be found. ###Code class StackInspector(StackInspector): def caller_function(self) -> Callable: """Return the calling function""" frame = self.caller_frame() name = frame.f_code.co_name func = self.search_func(name) if func: return func if not name.startswith('<'): warnings.warn(f"Couldn't find {name} in caller") return self.create_function(frame) def unknown(self) -> None: # Placeholder for unknown functions pass ###Output _____no_output_____ ###Markdown The method `is_internal_error()` allows us to differentiate whether some exception was raised by `StackInspector` (or a subclass) – or whether it was raised by the inspected code. ###Code import traceback class StackInspector(StackInspector): def is_internal_error(self, exc_tp: Type, exc_value: BaseException, exc_traceback: TracebackType) -> bool: """Return True if exception was raised from `StackInspector` or a subclass.""" if not exc_tp: return False for frame, lineno in traceback.walk_tb(exc_traceback): if self.our_frame(frame): return True return False ###Output _____no_output_____ ###Markdown Synopsis`StackInspector` is typically used as superclass, providing its functionality to subclasses. Here is an example of how to use `caller_function()`. The `test()` function invokes an internal method `caller()` of `StackInspectorDemo`, which in turn invokes `callee()`:| Function | Class | || --- | --- | --- || `callee()` | `StackInspectorDemo` | || `caller()` | `StackInspectorDemo` | invokes $\uparrow$ || `test()` | (main) | invokes $\uparrow$ || -/- | (main) | invokes $\uparrow$ |Using `caller_function()`, `callee()` determines the first caller outside of a `StackInspector` class and prints it out – i.e., ``. 
###Code class StackInspectorDemo(StackInspector): def callee(self) -> None: func = self.caller_function() assert func.__name__ == 'test' print(func) def caller(self) -> None: self.callee() def test() -> None: demo = StackInspectorDemo() demo.caller() test() ###Output <function test at 0x1032f0af0> ###Markdown Here are all methods defined in this chapter: ###Code # ignore from ClassDiagram import display_class_hierarchy, class_tree # ignore display_class_hierarchy([StackInspector], abstract_classes=[ StackInspector, ], public_methods=[ StackInspector.caller_frame, StackInspector.caller_function, StackInspector.caller_globals, StackInspector.caller_locals, StackInspector.caller_location, StackInspector.search_frame, StackInspector.search_func, StackInspector.is_internal_error, StackInspector.our_frame, ], project='debuggingbook') ###Output _____no_output_____ ###Markdown Inspecting Call StacksIn this book, for many purposes, we need to lookup a function's location, source code, or simply definition. The class `StackInspector` provides a number of convenience methods for this purpose. **Prerequisites*** This is an internal helper class.* Understanding how frames and local variables are represented in Python helps. SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from debuggingbook.StackInspector import ```and then make use of the following features.`StackInspector` is typically used as superclass, providing its functionality to subclasses. Here is an example of how to use `caller_function()`. The `test()` function invokes an internal method `caller()` of `StackInspectorDemo`, which in turn invokes `callee()`:| Function | Class | || --- | --- | --- || `callee()` | `StackInspectorDemo` | || `caller()` | `StackInspectorDemo` | invokes $\uparrow$ || `test()` | (main) | invokes $\uparrow$ || -/- | (main) | invokes $\uparrow$ |Using `caller_function()`, `callee()` determines the first caller outside of a `StackInspector` class and prints it out – i.e., ``.```python>>> class StackInspectorDemo(StackInspector):>>> def callee(self) -> None:>>> func = self.caller_function()>>> assert func.__name__ == 'test'>>> print(func)>>> >>> def caller(self) -> None:>>> self.callee()>>> def test() -> None:>>> demo = StackInspectorDemo()>>> demo.caller()>>> test()```Here are all methods defined in this chapter:![](PICS/StackInspector-synopsis-1.svg) Inspecting Call Stacks`StackInspector` is a class that provides a number of utility functions to inspect a [call stack](https://en.wikipedia.org/wiki/Call_stack), notably to identify caller functions. When tracing or instrumenting functions, a common issue is to identify the currently active functions. A typical situation is depicted below, where `my_inspector()` currently traces a function called `function_under_test()`.| Function | Class | || --- | --- | --- || ... | `StackInspector` | || `caller_frame()` | `StackInspector` | invokes $\uparrow$ || `caller_function()` | `StackInspector` | invokes $\uparrow$ || `my_inspector()` | some inspector; a subclass of `StackInspector` | invokes $\uparrow$ || `function_under_test()` | (any) | is traced by $\uparrow$ || -/- | (any) | invokes $\uparrow$ |To determine the calling function, `my_inspector()` could check the current frame and retrieve the frame of the caller. However, this caller could be some tracing function again invoking `my_inspector()`. Therefore, `StackInspector` provides a method `caller_function()` that returns the first caller outside of a `StackInspector` class. 
This way, a subclass of `StackInspector` can define an arbitrary set of functions (and call stack); `caller_function()` will always return a function outside of the `StackInspector` subclass. ###Code import bookutils import inspect import warnings from types import FunctionType, FrameType, TracebackType # ignore from typing import cast, Dict, Any, Tuple, Callable, Optional, Type ###Output _____no_output_____ ###Markdown The method `caller_frame()` walks the current call stack from the current frame towards callers (using the `f_back` attribute of the current frame) and returns the first frame that is _not_ a method or function from the current `StackInspector` class or its subclass. To determine this, the method `our_frame()` determines whether the given execution frame refers to one of the methods of `StackInspector` or one of its subclasses. ###Code class StackInspector: """Provide functions to inspect the stack""" def caller_frame(self) -> FrameType: """Return the frame of the caller.""" # Walk up the call tree until we leave the current class frame = cast(FrameType, inspect.currentframe()) while self.our_frame(frame): frame = cast(FrameType, frame.f_back) return frame def our_frame(self, frame: FrameType) -> bool: """Return true if `frame` is in the current (inspecting) class.""" return isinstance(frame.f_locals.get('self'), self.__class__) ###Output _____no_output_____ ###Markdown When we access program state or execute functions, we do so in the caller's environment, not ours. The `caller_globals()` method acts as replacement for `globals()`, using `caller_frame()`. ###Code class StackInspector(StackInspector): def caller_globals(self) -> Dict[str, Any]: """Return the globals() environment of the caller.""" return self.caller_frame().f_globals def caller_locals(self) -> Dict[str, Any]: """Return the locals() environment of the caller.""" return self.caller_frame().f_locals ###Output _____no_output_____ ###Markdown The method `caller_location()` returns the caller's function and its location. It does a fair bit of magic to retrieve nested functions, by looking through global and local variables until a match is found. This may be simplified in the future. ###Code Location = Tuple[Callable, int] class StackInspector(StackInspector): def caller_location(self) -> Location: """Return the location (func, lineno) of the caller.""" return self.caller_function(), self.caller_frame().f_lineno ###Output _____no_output_____ ###Markdown The function `search_frame()` allows to search for an item named `name`, walking up the call stack. This is handy when trying to find local functions during tracing, for whom typically only the name is provided. ###Code class StackInspector(StackInspector): def search_frame(self, name: str, frame: Optional[FrameType] = None) -> \ Tuple[Optional[FrameType], Optional[Callable]]: """ Return a pair (`frame`, `item`) in which the function `name` is defined as `item`. 
""" if frame is None: frame = self.caller_frame() while frame: item = None if name in frame.f_globals: item = frame.f_globals[name] if name in frame.f_locals: item = frame.f_locals[name] if item and callable(item): return frame, item frame = cast(FrameType, frame.f_back) return None, None def search_func(self, name: str, frame: Optional[FrameType] = None) -> \ Optional[Callable]: """Search in callers for a definition of the function `name`""" frame, func = self.search_frame(name, frame) return func ###Output _____no_output_____ ###Markdown If we cannot find a function by name, we can create one, using `create_function()`. ###Code class StackInspector(StackInspector): # Avoid generating functions more than once _generated_function_cache: Dict[Tuple[str, int], Callable] = {} def create_function(self, frame: FrameType) -> Callable: """Create function for given frame""" name = frame.f_code.co_name cache_key = (name, frame.f_lineno) if cache_key in self._generated_function_cache: return self._generated_function_cache[cache_key] try: # Create new function from given code generated_function = cast(Callable, FunctionType(frame.f_code, globals=frame.f_globals, name=name)) except TypeError: # Unsuitable code for creating a function # Last resort: Return some function generated_function = self.unknown except Exception as exc: # Any other exception warnings.warn(f"Couldn't create function for {name} " f" ({type(exc).__name__}: {exc})") generated_function = self.unknown self._generated_function_cache[cache_key] = generated_function return generated_function ###Output _____no_output_____ ###Markdown The method `caller_function()` puts all of these together, simply looking up and returning the currently calling function – and creating one if it cannot be found. ###Code class StackInspector(StackInspector): def caller_function(self) -> Callable: """Return the calling function""" frame = self.caller_frame() name = frame.f_code.co_name func = self.search_func(name) if func: return func if not name.startswith('<'): warnings.warn(f"Couldn't find {name} in caller") return self.create_function(frame) def unknown(self) -> None: # Placeholder for unknown functions pass ###Output _____no_output_____ ###Markdown The method `is_internal_error()` allows us to differentiate whether some exception was raised by `StackInspector` (or a subclass) – or whether it was raised by the inspected code. ###Code import traceback class StackInspector(StackInspector): def is_internal_error(self, exc_tp: Type, exc_value: BaseException, exc_traceback: TracebackType) -> bool: """Return True if exception was raised from `StackInspector` or a subclass.""" if not exc_tp: return False for frame, lineno in traceback.walk_tb(exc_traceback): if self.our_frame(frame): return True return False ###Output _____no_output_____ ###Markdown Synopsis`StackInspector` is typically used as superclass, providing its functionality to subclasses. Here is an example of how to use `caller_function()`. The `test()` function invokes an internal method `caller()` of `StackInspectorDemo`, which in turn invokes `callee()`:| Function | Class | || --- | --- | --- || `callee()` | `StackInspectorDemo` | || `caller()` | `StackInspectorDemo` | invokes $\uparrow$ || `test()` | (main) | invokes $\uparrow$ || -/- | (main) | invokes $\uparrow$ |Using `caller_function()`, `callee()` determines the first caller outside of a `StackInspector` class and prints it out – i.e., ``. 
###Code class StackInspectorDemo(StackInspector): def callee(self) -> None: func = self.caller_function() assert func.__name__ == 'test' print(func) def caller(self) -> None: self.callee() def test() -> None: demo = StackInspectorDemo() demo.caller() test() ###Output <function test at 0x10822e9d0> ###Markdown Here are all methods defined in this chapter: ###Code # ignore from ClassDiagram import display_class_hierarchy, class_tree # ignore display_class_hierarchy([StackInspector], abstract_classes=[ StackInspector, ], public_methods=[ StackInspector.caller_frame, StackInspector.caller_function, StackInspector.caller_globals, StackInspector.caller_locals, StackInspector.caller_location, StackInspector.search_frame, StackInspector.search_func, StackInspector.is_internal_error, StackInspector.our_frame, ], project='debuggingbook') ###Output _____no_output_____ ###Markdown Inspecting Call StacksIn this book, for many purposes, we need to lookup a function's location, source code, or simply definition. The class `StackInspector` provides a number of convenience methods for this purpose. **Prerequisites*** This is an internal helper class.* Understanding how frames and local variables are represented in Python helps. SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from debuggingbook.StackInspector import ```and then make use of the following features.`StackInspector` is typically used as superclass, providing its functionality to subclasses. Here is an example of how to use `caller_function()`. The `test()` function invokes an internal method `caller()` of `StackInspectorDemo`, which in turn invokes `callee()`:| Function | Class | || --- | --- | --- || `callee()` | `StackInspectorDemo` | || `caller()` | `StackInspectorDemo` | invokes $\uparrow$ || `test()` | (main) | invokes $\uparrow$ || -/- | (main) | invokes $\uparrow$ |Using `caller_function()`, `callee()` determines the first caller outside of a `StackInspector` class and prints it out – i.e., ``.```python>>> class StackInspectorDemo(StackInspector):>>> def callee(self) -> None:>>> func = self.caller_function()>>> assert func.__name__ == 'test'>>> print(func)>>> >>> def caller(self) -> None:>>> self.callee()>>> def test() -> None:>>> demo = StackInspectorDemo()>>> demo.caller()>>> test()```Here are all methods defined in this chapter:![](PICS/StackInspector-synopsis-1.svg) Inspecting Call Stacks`StackInspector` is a class that provides a number of utility functions to inspect a [call stack](https://en.wikipedia.org/wiki/Call_stack), notably to identify caller functions. When tracing or instrumenting functions, a common issue is to identify the currently active functions. A typical situation is depicted below, where `my_inspector()` currently traces a function called `function_under_test()`.| Function | Class | || --- | --- | --- || ... | `StackInspector` | || `caller_frame()` | `StackInspector` | invokes $\uparrow$ || `caller_function()` | `StackInspector` | invokes $\uparrow$ || `my_inspector()` | some inspector; a subclass of `StackInspector` | invokes $\uparrow$ || `function_under_test()` | (any) | is traced by $\uparrow$ || -/- | (any) | invokes $\uparrow$ |To determine the calling function, `my_inspector()` could check the current frame and retrieve the frame of the caller. However, this caller could be some tracing function again invoking `my_inspector()`. Therefore, `StackInspector` provides a method `caller_function()` that returns the first caller outside of a `StackInspector` class. 
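###Markdown As a further illustration, here is a minimal sketch of how a subclass might combine `caller_location()` and `caller_globals()`; the class `CallReporter` and the function `report_demo()` are made-up names used only for this example. ###Code
class CallReporter(StackInspector):
    def report(self) -> None:
        # First caller outside of StackInspector (and its subclasses)
        func, lineno = self.caller_location()
        print(f"Called from {func.__name__}() at line {lineno}")
        # Peek into the caller's global environment
        print("Caller has", len(self.caller_globals()), "globals")

def report_demo() -> None:
    CallReporter().report()

report_demo()
###Output _____no_output_____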
opencharts/medium-wind-10m.ipynb
###Markdown 10m wind and mean sea level pressure This notebook provides guidance on how to explore and plot the ECMWF open dataset to reproduce the map from the ECMWF open charts web product. The original product can be found at this link: https://apps.ecmwf.int/webapps/opencharts/products/medium-wind-10m Retrieve Data This product takes three parameters as input:* [Mean sea level pressure](https://apps.ecmwf.int/codes/grib/param-db/?id=151)* [10 metre U wind component](https://apps.ecmwf.int/codes/grib/param-db/?id=165)* [10 metre V wind component](https://apps.ecmwf.int/codes/grib/param-db/?id=166) In this example, we will use: - [**ecmwf.opendata**](https://github.com/ecmwf/ecmwf-opendata) Client to download the data- [**ecmwf.data**](https://github.com/ecmwf/ecmwf-data) library to read and process the data - [**magpye**](https://magpye.readthedocs.io) to plot the result First we need to install them in the current Jupyter kernel: Note: If you are running the notebook on MyBinder or already have the libraries installed, go directly to importing the libraries. Note: If you don't have these libraries installed, click on the three dots below, uncomment the code and run the next cell. ###Code #!pip install ecmwf-data ecmwf-opendata magpye import ecmwf.data as ecdata from magpye import GeoMap from ecmwf.opendata import Client client = Client("ecmwf", beta=True) parameters = ['msl', '10u', '10v'] filename = 'medium-wind-10m.grib' filename client.retrieve( date=0, time=0, step=12, stream="oper", type="fc", levtype="sfc", param=parameters, target=filename ) ###Output ###Markdown Reading and processing the data Now we can use **ecmwf.data** to read the files. ###Code data = ecdata.read(filename) ###Output _____no_output_____ ###Markdown The **describe()** function will give us an overview of the dataset. ###Code data.describe() ###Output _____no_output_____ ###Markdown And an overview of one parameter, where we can see more information, such as units or type of level. ###Code data.describe('msl') msl = data.select(shortName = 'msl') msl.describe() ###Output _____no_output_____ ###Markdown Mean sea level pressure data has units of Pa, but we want to plot it in hPa, so we need to convert it. ###Code msl /= 100 ###Output _____no_output_____ ###Markdown We can calculate wind speed using the u and v components: ###Code u = data.select(shortName='10u') v = data.select(shortName='10v') speed = ecdata.speed(u,v) speed.describe() ###Output _____no_output_____ ###Markdown And finally, we can plot the data on the map. ###Code fig = GeoMap(area_name = 'europe') fig.coastlines(land_colour = "cream",resolution = "medium") fig.contour_shaded(speed, style = "speed_green_low") fig.contour_lines(msl, style = "black_i5") fig.arrows(u=u,v=v, wind_style="arrows", density=2, colour='black') fig.coastlines(resolution="medium") fig.gridlines() fig.title(["10m wind and mean sea level pressure", "START TIME: <grib_info key='base-date' format='%a %d %B %Y %H' where='shortName=msl'/>", "VALID TIME: <grib_info key='valid-date' format='%a %d %B %Y %H' where='shortName=msl'/>, STEP: <grib_info key='step' where='shortName=msl'/>"]) fig.legend() fig.footer("© European Centre for Medium-Range Weather Forecasts (ECMWF) Source: www.ecmwf.int Licence: CC-BY-4.0 and ECMWF Terms of Use (https://apps.ecmwf.int/datasets/licences/general/)", logo='ecmwf') fig.show() ###Output _____no_output_____
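###Markdown As a side note, the wind speed derived from the u and v components is, mathematically, just the magnitude of the wind vector, $\sqrt{u^2 + v^2}$, which is what `ecdata.speed(u, v)` corresponds to here. A small NumPy sketch of the same formula, using made-up sample values rather than the GRIB fields: ###Code
import numpy as np

# Hypothetical 10m wind components in m/s (illustrative values only)
u10 = np.array([3.0, -2.5, 0.0])
v10 = np.array([4.0, 1.5, -6.0])

speed_ms = np.hypot(u10, v10)   # elementwise sqrt(u**2 + v**2)
print(speed_ms)                 # [5.     2.9155 6.    ]
###Output _____no_output_____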
notebooks/Major Leagues.ipynb
###Markdown Major Leagues __Problem Statement__: Predict the scores on the basis of other features in dataset__Data Source__: https://github.com/fivethirtyeight/nfl-elo-game/tree/master/dataData Set contains Historical NFL scores back to 1920 in nfl_games.csv, with FiveThirtyEight's Elo win probabilities for each game. Based on the data we have home team name, away team name, scores by both team in each game, elo predictions for both teams, date and the season.__References__:https://www.kaggle.com/pavanraj159/european-football-data-analysishttps://towardsdatascience.com/simple-and-multiple-linear-regression-in-python-c928425168f9 https://www.kaggle.com/angps95/fifa-world-cup-2018-prediction ###Code %matplotlib inline import pandas as pd from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt from sklearn.metrics import accuracy_score , confusion_matrix from sklearn.linear_model import LogisticRegression import seaborn as sns from sklearn.model_selection import KFold from sklearn.grid_search import GridSearchCV from sklearn.model_selection import cross_val_score import numpy as np import warnings warnings.filterwarnings('ignore') ###Output _____no_output_____ ###Markdown Data Extraction ###Code games= pd.read_csv('../data/external/nfl_games.csv') list(games) games.head() #Check if there are any null entries games.isnull().sum() #Total number of unique entries for each column games.nunique() #Relation between each feature figure, ax = plt.subplots(figsize=(8,8)) cor = games.corr() sns.heatmap(cor, square = True, linewidths=.5, ax=ax) ###Output _____no_output_____ ###Markdown Feature Engineering ###Code # Create new column for storing the match outcome #Return 2 if home team wins #Return 1 if match ties #Return 0 if away team wins def label(data): if data["score1"] > data["score2"]: return 2 elif data["score2"] > data["score1"]: return 0 elif data["score2"] == data["score1"]: return 1 games["winner"] = games.apply(lambda games:label(games),axis=1) plt.figure(figsize=(8,8)) labels = 'HOME TEAM', 'AWAY TEAM', 'DRAW' colors = ['yellowgreen', 'lightcoral', 'lightskyblue'] games["winner"].value_counts().plot.pie(autopct = "%1.0f%%",labels=labels, colors =colors) my_circ = plt.Circle((0,0),.7,color = "white") plt.gca().add_artist(my_circ) plt.title("Proportion of Game Outcomes") plt.show() games.head() #Create another data frame object with the features used for predicting the scores games_filtered = games[['team1', 'team2','elo1','elo2', 'winner']].copy() games_filtered.head() #Use one hot encoding to convert symbolic data to numerical data final=pd.get_dummies(games_filtered,prefix=['home_team','away_team'],columns=['team1','team2']) final.head() ###Output _____no_output_____ ###Markdown Regression Modeling Predicting using both score and elo rating features ###Code #Define features X=final.drop(['winner'], axis=1) #Define target y=final['winner'] y=y.astype('int') x_train, x_test, y_train, y_test = train_test_split(X, y, test_size = 0.33, random_state = 12) model = LogisticRegression() model.fit(x_train, y_train) prediction = dict() prediction['Logistic'] = model.predict(x_test) print('Accuracy: ',accuracy_score(y_test, prediction['Logistic'])) conf_mat_logist = confusion_matrix(y_test, prediction['Logistic']) print('Logist \r', conf_mat_logist) #Print first 20 predictions print(prediction['Logistic'][0:20]) ###Output [0 0 2 2 2 2 2 0 2 2 2 2 2 2 2 0 2 2 0 2] ###Markdown Predicting using just score feature ###Code #Define features 
X=final.drop(['winner','elo1','elo2'], axis=1) #Define target y=final['winner'] y=y.astype('int') x_train, x_test, y_train, y_test = train_test_split(X, y, test_size = 0.33, random_state = 12) model = LogisticRegression() y_pred = model.fit(x_train, y_train) prediction = dict() prediction['Logistic'] = model.predict(x_test) print('Accuracy: ',accuracy_score(y_test, prediction['Logistic'])) conf_mat_logist = confusion_matrix(y_test, prediction['Logistic']) print('Logist \r', conf_mat_logist) #Print first 20 predictions print(prediction['Logistic'][0:20]) ###Output [2 2 2 2 2 2 2 2 2 2 0 0 2 2 2 2 2 0 2 2] ###Markdown When the Elo ratings are included among the features, the model reaches about 64% accuracy on the test data, and the first 20 predictions contain more home-team wins than away-team wins. When the Elo rating columns are dropped and only the team identities remain as features, the accuracy falls to about 57%, and the first 20 predictions are skewed even more strongly towards home-team wins. Predict actual scores for the home and away teams ###Code games['teamA'] = pd.factorize(games.team1)[0] + 1 games['teamB'] = pd.factorize(games.team2)[0] + 1 games.head() games = games.drop(columns=['date','season','team1','team2','result1','neutral','playoff']) final = games.copy() #Define features x=final.drop(['score1','score2'], axis=1) #Define target y_home=final['score1'] y_away=final['score2'] x_home_train,x_home_test,y_home_train,y_home_test=train_test_split(x,y_home,test_size=0.2,random_state=0) x_away_train,x_away_test,y_away_train,y_away_test=train_test_split(x,y_away,test_size=0.2,random_state=0) k_fold = KFold(n_splits=5, shuffle=True, random_state=0) param_grid = dict(C=(0.0001,0.001,0.005,0.01,0.1,0.5,1)) homelog_reg1 = GridSearchCV(LogisticRegression(penalty="l1"),param_grid=param_grid,scoring="f1_macro") homelog_reg1.fit(x_home_train,y_home_train) print(homelog_reg1.best_params_) ###Output {'C': 0.5} ###Markdown The columns in the confusion matrix below represent the predicted game score for the home team, and the index represents the actual game score. ###Code cm=confusion_matrix(y_home_train,homelog_reg1.predict(x_home_train)) cm=pd.DataFrame(cm) cm.T awaylog_reg1 = GridSearchCV(LogisticRegression(penalty="l1"),param_grid=param_grid,scoring="f1_macro") awaylog_reg1.fit(x_away_train,y_away_train) print(awaylog_reg1.best_params_) ###Output {'C': 0.5} ###Markdown The columns in the confusion matrix below represent the predicted game score for the away team, and the index represents the actual game score. ###Code cm=confusion_matrix(y_away_train,awaylog_reg1.predict(x_away_train)) # evaluate the away-team model on the away-team data cm=pd.DataFrame(cm) cm.T ###Output _____no_output_____
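###Markdown As a quick sanity check on the winner-classification accuracies reported above, it helps to compare against a trivial baseline that always predicts the most frequent outcome in the training data. A minimal sketch using scikit-learn's `DummyClassifier` on the same `x_train`/`x_test` split defined above (the variable name `baseline` is introduced here for illustration): ###Code
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

# Baseline: always predict the most common class seen in training
baseline = DummyClassifier(strategy="most_frequent")
baseline.fit(x_train, y_train)
print('Baseline accuracy: ', accuracy_score(y_test, baseline.predict(x_test)))
###Output _____no_output_____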
notebooks/QUBO_Examples_MaximumCut.ipynb
###Markdown Prepared by Sabah Ud Din Ahmad Examples for QUBO Formulation In the previous section, we learnt about objective functions and QUBO problems. Now, let's apply the QUBO formulation to some combinatorial optimization problems. Maximum Cut Given a graph, the problem requires splitting the vertices/nodes into two disjoint groups so that there are as many edges as possible between the groups. Such a partition of the vertices into two disjoint sets is called a cut, and an edge is said to be cut if its two endpoints lie in different sets. The goal of this problem is to find a cut that cuts the maximum number of edges. Since we have to partition the vertices in the graph, we will assign a binary variable $x_i$ to each vertex:$$x_{i}=\left\{\begin{array}{ll} 0, & \text{if vertex i is a part of Group 1} \\ 1, & \text{if vertex i is a part of Group 2} \\\end{array}\right.$$**The objective function for optimization is maximizing the number of cut edges.** For a particular graph, let's consider a single edge. We only want to count an edge if its endpoints/vertices are in different groups. Let this be denoted by a function edge_count$(x_i,x_j)$ that depends on the values of $x_i$ and $x_j$. If the vertices are in different groups, edge_count$(x_i,x_j)$ gives a 1; otherwise 0.|$x_i$ |$x_j$|edge_count$(x_i,x_j)$|Comment||:-----|:----:|:----:|:----:||0 |0 |0 |Vertices are in the same group||0 |1 |1 |Vertices are in different groups||1 |0 |1 |Vertices are in different groups||1 |1 |0 |Vertices are in the same group|From this table, we observe that the expression $x_i+x_j-2x_ix_j$ reproduces the edge_count column. Task 1 Verify that the expression $x_i+x_j-2x_ix_j$ gives the correct values of edge_count in the table. [Click here for solution](QUBO_Examples_MaximumCut_Solutions.ipynbtask1)*** Since our objective is maximizing the total number of cut edges, for the entire graph the objective function is:$$\max \sum_{(i,j) \in E} (x_i+x_j-2x_ix_j)$$where the sum is over the edge set E of the graph. Since the QUBO formulation minimizes an objective function, we must convert this maximization problem into a minimization problem by multiplying the expression by -1. Our final QUBO expression is the following:$$\min \sum_{(i,j) \in E} (-x_i-x_j+2x_ix_j)$$ Example 1 Let's assume we have a simple network of 5 vertices and 6 edges. **QUBO Algebraic Expression** Using the QUBO expression $\min \sum_{(i,j) \in E} (-x_i-x_j+2x_ix_j)$ and summing over the edges,$\min y = (-x_1-x_2+2x_1x_2)+(-x_1-x_3+2x_1x_3)+(-x_2-x_4+2x_2x_4)+(-x_3-x_4+2x_3x_4)+(-x_3-x_5+2x_3x_5)+(-x_4-x_5+2x_4x_5)$$\min y = -2x_1-2x_2-3x_3-3x_4-2x_5+2x_1x_2+2x_1x_3+2x_2x_4+2x_3x_4+2x_3x_5+2x_4x_5$**QUBO Matrix Formulation** Since the variables are binary, their 0 and 1 values remain unchanged when squared, so we can replace any term $x_i^2$ with $x_i$, and vice versa (this doesn't apply to products $x_i x_j$).$\min y = -2x_1^2-2x_2^2-3x_3^2-3x_4^2-2x_5^2+2x_1x_2+2x_1x_3+2x_2x_4+2x_3x_4+2x_3x_5+2x_4x_5$This takes the desired form:$$\min_{x \in \{0,1\}^n} x^T Q x$$where $x$ is:$$x = \begin{pmatrix}x_1 \\x_2 \\x_3 \\x_4 \\x_5 \end{pmatrix}$$and the upper-triangular matrix Q is:$$Q = \begin{pmatrix}-2 & 2 & 2 & 0 & 0\\0 & -2 & 0 & 2 & 0\\0 & 0 & -3 & 2 & 2\\0 & 0 & 0 & -3 & 2\\0 & 0 & 0 & 0 & -2\end{pmatrix}$$Viewed without that substitution, the linear terms determine the elements on the main diagonal of Q and the quadratic terms determine the off-diagonal elements. 
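###Markdown To make the matrix form concrete, here is a small sketch that builds this Q with NumPy and evaluates $x^T Q x$ for one assignment, $x=(1,1,0,0,0)$, i.e. vertices 1 and 2 in Group 2 and the rest in Group 1. Only the edges $(1,3)$ and $(2,4)$ are then cut, so the objective value should be $-2$: ###Code
import numpy as np

Q = np.array([[-2,  2,  2,  0,  0],
              [ 0, -2,  0,  2,  0],
              [ 0,  0, -3,  2,  2],
              [ 0,  0,  0, -3,  2],
              [ 0,  0,  0,  0, -2]])

x = np.array([1, 1, 0, 0, 0])   # vertices 1 and 2 in Group 2

print(x @ Q @ x)                # -2, i.e. an edge-cut size of 2
###Output _____no_output_____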
Now, let's minimize our QUBO objective function and find the optimum $x$ that results in a division of vertices with the greatest edge-cut size. Edge-cut size is a measure of the total number of edges crossed by a cut. **Identification of $x$** For our graph of 5 vertices, suppose we have the following cut:The cut partitions vertices 1, 4 & 5 in one group (assume it to be Group 1) and vertices 2 & 3 in the other group (Group 2). From our definition of the binary variable $x_i$, $x_1 = x_4 = x_5 = 0$ and $x_2 = x_3 = 1$. So, $x=(0,1,1,0,0)$. Similarly, we can identify $x$ for other possible cuts too. Task 2For our graph in Example 1, we have some possible cuts as shown below: Identify $x$ and the edge-cut size for each cut. Then, evaluate the value of the objective function with the identified $x$ using QUBO matrix formulation. [Click here for solution](QUBO_Examples_MaximumCut_Solutions.ipynbtask2)*** Task 3Input matrix Q calculated in Example 1 to the function *qubo_solver()* and determine $x$ which minimizes $x^T Qx$ and the corresponding minimum value.$$Q = \begin{pmatrix}-2 & 2 & 2 & 0 & 0\\0 & -2 & 0 & 2 & 0\\0 & 0 & -3 & 2 & 2\\0 & 0 & 0 & -3 & 2\\0 & 0 & 0 & 0 & -2\end{pmatrix}$$ ###Code # Access the qubo_solver() function %run qubo_functions.py # Define the Q matrix # Pass the matrix as an argument to the function qubo_solver(Q) ###Output _____no_output_____ ###Markdown [Click here for solution](QUBO_Examples_MaximumCut_Solutions.ipynbtask3)*** Now, let's verify our result using the QUBO algebraic formulation. Task 4Using the QUBO algebraic expression and testing all possibilities of $x$ for the possible cuts, verify that the QUBO model in Example 1 has a minima at $x=(1,0,0,1,1)$ with a maximum edge-cut size of 5. (You may use a Python code for this task). ###Code #Create a function to evaluate the value of objective function for each x. def maxcut_task_3(x): #INSERT YOUR CODE HERE! return y #Minimize the function for all possibilites of x. #The following code generates the possile permutations of x and calculates the value of the objectve funtion for each. import numpy as np import itertools possible_values = {} # Creating a dictionary vec_permutations = itertools.product([0,1], repeat=5) # A list of all the possible permutations for x vector for permutation in vec_permutations: x = np.array([[var] for var in permutation]) # Converts the permutation into a column vector value = maxcut_task_3(x) # Call to the function possible_values[value[0]] = x # Storing vectors and values in dictionary vector = tuple(x.T[0]) print("Vector x =", vector, "; Value =",int(value)) min_value = min(possible_values.keys()) # Lowest value of the objective function opt_vector = tuple(possible_values[min_value].T[0]) # Optimum x corresponding to lowest value print("---") print("The vector x =", opt_vector, "minimizes the objective function to a value of", int(min_value)) ###Output _____no_output_____ ###Markdown [Click here for solution](QUBO_Examples_MaximumCut_Solutions.ipynbtask4)*** Task 5Let's assume we have a simple network of 5 vertices and 7 edges. Using the QUBO expression $\min \sum_{(i,j) \in E} (-x_i-x_j+2x_ix_j)$, determine the matrix Q for this graph. [Click here for solution](QUBO_Examples_MaximumCut_Solutions.ipynbtask5)*** Task 6Repeat Task 3 for the matrix Q calculated in Task 5. 
###Code #Access the qubo_solver() function %run qubo_functions.py # Define the Q matrix #Assign it the name Q2 # Pass the matrix as an argument to the function qubo_solver(Q2) ###Output _____no_output_____ ###Markdown [Click here for solution](QUBO_Examples_MaximumCut_Solutions.ipynbtask6)*** Task 7 Using the QUBO algebraic expression, verify your result for Task 6. (You may use Python code for this task.) ###Code #Create a function to evaluate the value of the objective function for each x. def maxcut_task_7(x): #INSERT YOUR CODE HERE! return y #Minimize the function for all possibilities of x. #The following code generates the possible permutations of x and calculates the value of the objective function for each. import numpy as np import itertools possible_values_7 = {} vec_permutations = itertools.product([0,1], repeat=5) # A list of all the possible permutations for the x vector for permutation in vec_permutations: x = np.array([[var] for var in permutation]) # Converts the permutation into a column vector value = maxcut_task_7(x) possible_values_7[value[0]] = x vector = tuple(x.T[0]) # print("Vector x =", vector, "; Value =",int(value)) # Displays every vector with its corresponding value min_value = min(possible_values_7.keys()) # Lowest value of the objective function opt_vector = tuple(possible_values_7[min_value].T[0]) # Optimum x corresponding to lowest value print("---") print("The vector x =", opt_vector, "minimizes the objective function to a value of", int(min_value)) ###Output _____no_output_____
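###Markdown More generally, the Max-Cut QUBO matrix can be assembled directly from an edge list using the rule derived above: each edge $(i,j)$ contributes $-1$ to the diagonal entries $Q_{ii}$ and $Q_{jj}$ and $+2$ to the off-diagonal entry $Q_{ij}$. A small helper sketch (the function name `maxcut_Q` is introduced here for illustration and is not part of `qubo_functions.py`): ###Code
import numpy as np

def maxcut_Q(n_vertices, edges):
    """Build the upper-triangular Max-Cut QUBO matrix from a 1-indexed edge list."""
    Q = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        a, b = sorted((i - 1, j - 1))   # 0-based indices, keep the upper triangle
        Q[a, a] -= 1
        Q[b, b] -= 1
        Q[a, b] += 2
    return Q

# Reproduces the matrix of Example 1
edges_example1 = [(1, 2), (1, 3), (2, 4), (3, 4), (3, 5), (4, 5)]
print(maxcut_Q(5, edges_example1))
###Output _____no_output_____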
PQC_test.ipynb
###Markdown expressibility test ###Code expressibility(pqc, 5000) ###Output 100 200 300 400 500 600 700 800 900 1000 1100 1200 1300 1400 1500 1600 1700 1800 1900 2000 2100 2200 2300 2400 2500 2600 2700 2800 2900 3000 3100 3200 3300 3400 3500 3600 3700 3800 3900 4000 4100 4200 4300 4400 4500 4600 4700 4800 4900 ###Markdown entangling capability calculation ###Code entangling_capability(pqc, 1000) Circ4 = PQC("circ4", 4); Circ4.circ.draw('mpl') expressibility(Circ4, 1000) entangling_capability(Circ4, 1000) ###Output _____no_output_____ ###Markdown expressibility test ###Code expressibility(pqc, 1000) ###Output 100 200 300 400 500 600 700 800 900 ###Markdown entangling capability calculation ###Code entangling_capability(pqc, 1000) Circ4 = PQC("circ4", 4); Circ4.circ.draw('mpl') expressibility(Circ4, 1000) entangling_capability(Circ4, 1000) ###Output _____no_output_____ ###Markdown expressibility test entangling capability calculation ###Code entangling_capability(pqc, 1000) ###Output 100 200 300 400 500 600 700 800 900
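###Markdown For context, and assuming these helpers follow the usual Sim-et-al.-style definitions, expressibility is typically estimated by sampling pairs of random parameter settings, collecting the resulting state fidelities, and comparing their histogram with the Haar-random baseline $P_{\text{Haar}}(F) = (N-1)(1-F)^{N-2}$, where $N = 2^{n}$ for $n$ qubits. A small NumPy sketch of that baseline for the 4-qubit circuits used here: ###Code
import numpy as np

n_qubits = 4
N = 2 ** n_qubits

F = np.linspace(0.0, 1.0, 75)                # fidelity bins
p_haar = (N - 1) * (1.0 - F) ** (N - 2)      # Haar fidelity density

print(p_haar[:5])   # the density is largest near F = 0 and decays quickly
###Output _____no_output_____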
3_uq/3_rc_noise/xx_rc_combination_noise_sample.ipynb
###Markdown Method: RC Dataset: Lorenz-96, F = 8 Purpose: Uncertainty Quantification - Mean Variance Estimation Deep Ensemble 1. Set-up ###Code # GPU import os os.environ["CUDA_VISIBLE_DEVICES"] = "3" # Package import sys sys.path.append("../..") from create_data import load_data from utils import * # Number of testing samples import numpy as np import matplotlib.pyplot as plt from time import time from scipy import sparse import jax import jax.numpy as jnp from jax import value_and_grad from jax.numpy import tanh from jax.example_libraries import optimizers train, test = load_data("Lorenz 96, F = 8", "../../data/lorenz8", 0.5) np.random.seed(1) train.data = train.data + np.random.normal(0, 1e-1, train.data.shape) print(f"Train size: {train.data.shape}") print(f"Test size: {test.data.shape}") ###Output Train size: (90000, 40) Test size: (90000, 40) ###Markdown **Create test set** ###Code L_forecast_test = 400 # steps to forecast forward (when testing) np.random.seed(1) data_test = test.data T_test, data_dim = data_test.shape possible_idx = T_test - (L_forecast_test + 1) # minus number of steps forward, and the warm-up period T_indices = np.random.randint(0, possible_idx, size = NUM_TEST) t_past_batch = np.repeat(T_indices[:, None], WARM_UP_TEST, axis = 1).astype(int) # 200 warmup t_pred_batch = (T_indices[:, None] + np.arange(1, 1 + L_forecast_test)[None, :].astype(int)) X_test = data_test[t_past_batch] y_test = data_test[t_pred_batch] print(f"Test input size: {X_test.shape}") # Number of test points x input length x dim print(f"Test output size: {y_test.shape}") # Number of test points x horizon x dim ###Output Test input size: (100, 2000, 40) Test output size: (100, 400, 40) ###Markdown 2. RC Implementation ###Code def get_parameters(nn_size, connectivity, spec_radius, lambd, seed, batch_size, num_epoch, lr_schedule = [1e-4], early_stopping = EARLY_STOPPING): """ Returns trained parameters (beta, intercept) and hidden layer values """ def initialize_coef(): """ Initializes W_in and W. W_in size = nn_size x data_dim W size = nn_size x nn_size """ start = time() # Generate input -> hidden unit weights W_in = 2 * (np.random.rand(nn_size, data_dim) - 0.5) W_in = W_in / (4 * np.sqrt(data_dim)) # Generate hidden -> hidden unit weights # Considers connectivity to make the matrix sparse start_mat = time() rows = np.concatenate([np.full(connectivity, i) for i in range(nn_size)]) cols = np.concatenate([np.random.choice(range(nn_size), size = connectivity, replace = False) for _ in range(nn_size)]) vals = np.random.uniform(low = -omega, high = omega, size = (nn_size * connectivity)) W = sparse.csr_matrix((vals, (rows, cols)), shape = (nn_size, nn_size)) end_mat = time() print(f"W generated. Time taken: {end_mat - start_mat:.2f}s") # Calculate eigenvalues for scaling of matrix print("Calculating eigenvalue") e_start = time() eigenvals = sparse.linalg.eigs(W, which = "LM", return_eigenvectors = False, k = 1) max_eigen = np.abs(eigenvals) e_end = time() print(f"Eigenvalue calculated. Time taken: {e_end - e_start:.2f}s") # Scale matrix by spectral radius W = W / max_eigen * spec_radius # scale the matrix W by its spectral radius W = sparse.csr_matrix(W) end = time() print(f"W and W_in generated. 
Time taken: {end-start:.2f}s") print() return W_in, W def generate_hidden_states(W_in, W): """ Generate hidden states (z) values hidden_states size = data_size x nn_size """ start = time() print("Generating z values...") indiv_z = np.zeros(shape = nn_size) hidden_states = np.zeros((train_size, nn_size)) for t in range(train_size): indiv_z = (1 - alpha) * indiv_z + \ alpha * np.tanh(W_in @ x[t] + W @ indiv_z) hidden_states[t, :] = indiv_z end = time() print(f"z values generated. Time taken: {end-start:.2f}s") return hidden_states def mse(y, y_pred): return jnp.mean((y_pred - y)**2) @jax.jit def neg_log_LH(params, x, y): """ returns negative-log-likelihood -logLH(P(y|params)) """ d = data_dim beta, intercept, beta2, intercept2 = params mu = x @ beta + intercept # train_size x data_dim log_sigma = (x @ beta2 + intercept2).mean() # train_size x 1 sigma = jnp.exp(log_sigma) mu_loss = mse(mu, y) constant = d * jnp.log(2 * jnp.pi) sigma_loss = d * log_sigma return 0.5*(constant + sigma_loss + (mu_loss / sigma**2)) def training(x, y): """ Trains regression of y~x using SGD. Returns parameters (beta, intercept, beta2, intercept2) where beta, intercept -> weights to determine the mean beta2, intecept2 -> weights to determine log_sigma beta size = nn_size x data_dim intercept = data_dim (will be added for each training data) beta2 size = nn_size x 1 intercept2 = 1 (will be added for each training data) should predict a mu with train_size x data_dim (\mu per dimension per datapoint) and a sigma with train_size x 1 (single \sigma for all dimensions per datapoint) """ @jax.jit def step(opt_state, x, y): params = get_params(opt_state) value, g = value_and_grad(neg_log_LH)(params, x, y) opt_state = opt_update(0, g, opt_state) return get_params(opt_state), opt_state, value start = time() # Plot loss loss_train_traj = [] loss_train_all_traj = [] # Init parameters beta = np.random.normal(0, 1 / np.sqrt(nn_size), size = (nn_size, data_dim)) beta2 = np.random.normal(0, 1 / np.sqrt(nn_size), size = (nn_size, 1)) intercept = np.random.normal(0, 1 / np.sqrt(nn_size * 2), size = (data_dim, )) intercept2 = np.random.normal(0, 1 / np.sqrt(nn_size * 2), size = (1, )) t_size = int(1. * train_size) overall_best_loss = 9999999 for i, lr in enumerate(lr_schedule): opt_init, opt_update, get_params = optimizers.adam(step_size = lr) opt_state = opt_init([beta, intercept, beta2, intercept2]) # For early stopping best_state = opt_state counter = 0 best_val_loss = 9999999 for epoch in range(num_epoch[i]): e_start = time() T_indices = np.arange(train_size) np.random.shuffle(T_indices) loss_epoch_train = [] for k in range(t_size // batch_size + 1): t_start = T_indices[np.arange(k * batch_size, (k+1) * batch_size).astype(int) % len(T_indices)] x_batch = x[t_start] y_batch = y[t_start] params, opt_state, l = step(opt_state, x_batch, y_batch) loss_epoch_train.append(l) loss_train_all_traj += loss_epoch_train mse_train = np.mean(loss_epoch_train) # -ve log likelihood loss_train_traj.append(mse_train) e_end = time() if mse_train < best_val_loss: best_val_loss = mse_train counter = 0 best_state = opt_state else: counter += 1 if (epoch + 1) % 10 == 0: print(f"Epoch {epoch + 1}: Train time = {e_end - e_start:.2f} | Train Loss = {mse_train:.7f}", end = " ") print() if counter == early_stopping: print(f"EARLY STOPPING. 
Epoch {epoch + 1}: Train loss = {mse_train:.7f}") break print(f"Best Training Loss : {best_val_loss:.7f}") if best_val_loss < overall_best_loss: print("IMPROVED VALIDATION LOSS") overall_best_loss = best_val_loss overall_best_state = best_state beta, intercept, beta2, intercept2 = get_params(overall_best_state) print() end = time() print(f"Total time: {end - start:.2f}") return get_params(overall_best_state) # beta, intercept, beta2, intercept2 start = time() x, y = train.data[:-1], train.data[1:] copy_x, copy_y = x, y train_size, data_dim = x.data.shape np.random.seed(seed) W_in, W = initialize_coef() z = generate_hidden_states(W_in, W) # Want to regression Y ~ X ==> Y ~ [z, z**2] final_y = y[transient:] final_z = z[transient:] print("Concatenating z with z**2", end = " "); concat_start = time() final_z = np.concatenate([final_z, final_z**2], axis = 1) # shape: train_size x (nn_size*2) concat_end = time() print(f"Contenation complete. Time taken: {concat_end-concat_start:.2f}s", end = "\n\n") train_size, nn_size = final_z.shape params = training(final_z, final_y) end = time() print(f"Complete. Time taken: {end - start:.2f}s") return params, (final_z, W_in, W) def get_test_pred(data_test, nn_size, params, W_in, W): beta, intercept, beta2, intercept2 = params num_data_test, trans, data_dim = data_test.shape # testing ex, # steps used (transient), dim of data def prediction(inp): """ Returns the mean of one of the testing input mean will be a length_to_test x data_dim vector """ z = np.zeros((nn_size, )) for i in range(trans): z = (1 - alpha) * z + alpha * np.tanh(W_in @ inp[i] + W @ z) mus = [] stddevs = [] x = beta.T @ np.concatenate([z, z**2]) + intercept # output / input_of_next | size = dim_data log_sd = beta2.T @ np.concatenate([z, z**2]) + intercept2 # log_sd of output | size = 1 mus.append(x) stddevs.append(jnp.exp(log_sd[0])) for _ in range(L_forecast_test - 1): z = (1 - alpha) * z + alpha * np.tanh(W_in @ x + W @ z) x = beta.T @ np.concatenate([z, z**2]) + intercept # output / input_of_next log_sd = beta2.T @ np.concatenate([z, z**2]) + intercept2 mus.append(x) stddevs.append(jnp.exp(log_sd[0])) return mus, stddevs start = time() mean_list = [] sd_list = [] for i in range(num_data_test): mean, sd = prediction(data_test[i]) mean_list.append(mean) sd_list.append(sd) if (i+1) % 10 == 0: print(f"{(i+1) / num_data_test * 100:.2f}% done") end = time() print(f"Testing complete. 
Time taken: {end - start:.2f}") return np.array(mean_list), np.array(sd_list) def neg_log_LH(mean_pred, sd_pred): d = data_dim constant_loss = d * np.log(2 * np.pi) mu_loss = (mean_pred - y_test)**2 if len(sd_pred.shape) == 2: sd_expanded = np.moveaxis(np.tile(sd_pred, (d, 1, 1)), 0, 2)# Repeat sd for each of the 40 dimensions elif len(sd_pred.shape) == 3: sd_expanded = sd_pred else: raise Exception("Invalid sd_pred dimension") return 0.5 * (constant_loss + d * np.log(sd_expanded) + (mu_loss / sd_expanded**2)).mean(axis = (0, 2)) def get_test_pred_sampled(data_test, params, W_in, W, seed): np.random.seed(seed) start = time() beta, intercept, beta2, intercept2 = params num_data_test, trans, data_dim = data_test.shape # testing ex, # steps used (transient), dim of data def generate_hidden_state(data): z = np.zeros((nn_size, )) for i in range(trans): z = (1 - alpha) * z + alpha * np.tanh(W_in @ data[i] + W @ z) return z test_mus = [] test_sds = [] counter = 0 for inp in data_test: z = generate_hidden_state(inp) first_mean = beta.T @ np.concatenate([z, z**2]) + intercept first_log_sd = beta2.T @ np.concatenate([z, z**2]) + intercept2 first_sd = np.exp(first_log_sd[0]) x = first_mean all_mu = [] all_sd = [] for tr in range(N_TRAJ_MVE // 5): x = np.random.normal(first_mean, first_sd) mu_list = [x] sd_list = [first_sd] for _ in range(L_forecast_test - 1): z = (1 - alpha) * z + alpha * np.tanh(W_in @ x + W @ z) x = beta.T @ np.concatenate([z, z**2]) + intercept # output / input_of_next log_sd = beta2.T @ np.concatenate([z, z**2]) + intercept2 mu_list.append(x) sd_list.append(np.exp(log_sd[0])) all_mu.append(np.array(mu_list)) all_sd.append(np.array(sd_list)) test_mus.append(np.array(all_mu)) test_sds.append(np.array(all_sd)) counter += 1 if counter % 5 == 0: print(f"{counter / num_data_test * 100:.2f}% done") end = time() print(f"Time taken: {end - start:.2f}") return np.array(test_mus), np.array(test_sds) ###Output _____no_output_____ ###Markdown 3. Best Parameters ###Code nn_size = 12000 ridge_penalty = 1e-6 spec_radius = 0.1 connectivity = 4 lr_list = [1e-4] epoch_list = [300] transient = 200 # points to ignore to allow system to stabilise omega = 1 # scale of the values of matrix W alpha = 1 # hidden state memory b_size = 200 ###Output _____no_output_____ ###Markdown 4. MVE Ensemble ###Code res_folder = os.path.join("results", "combined_noise") def run_seed(seed): """ Runs the experiment with optimal parameters and saves the predictions into a file """ params, internal = get_parameters(nn_size, connectivity, spec_radius, lambd = ridge_penalty, seed = seed, batch_size = b_size, num_epoch = epoch_list, lr_schedule = lr_list) _, W_in, W = internal mean_pred_all, sd_pred_all = get_test_pred_sampled(X_test, params, W_in, W, seed) mean_pred = mean_pred_all.mean(axis = 1) sd_pred = np.sqrt((np.moveaxis(np.tile(sd_pred_all, (40, 1, 1, 1)), 0, 3)**2 + mean_pred_all**2).mean(axis = 1) - mean_pred**2) file_name = "mu_preds_" + str(seed) + ".pkl" file_name_2 = "sd_preds_" + str(seed) + ".pkl" save_obj(mean_pred, res_folder, file_name) save_obj(sd_pred, res_folder, file_name_2) ###Output _____no_output_____ ###Markdown 4.1 Seed 2 ###Code run_seed(2) ###Output W generated. Time taken: 31.39s Calculating eigenvalue Eigenvalue calculated. Time taken: 3.52s W and W_in generated. Time taken: 38.67s Generating z values... z values generated. Time taken: 82.69s Concatenating z with z**2 Contenation complete. 
Time taken: 12.88s Epoch 10: Train time = 11.22 | Train Loss = -22.9251595 Epoch 20: Train time = 11.73 | Train Loss = -23.3332272 Epoch 30: Train time = 11.25 | Train Loss = -23.6678562 Epoch 40: Train time = 11.01 | Train Loss = -23.9424095 Epoch 50: Train time = 11.31 | Train Loss = -24.1698895 Epoch 60: Train time = 12.08 | Train Loss = -24.3485584 Epoch 70: Train time = 11.47 | Train Loss = -24.5028934 Epoch 80: Train time = 11.78 | Train Loss = -24.6188679 Epoch 90: Train time = 11.32 | Train Loss = -24.7147465 Epoch 100: Train time = 11.46 | Train Loss = -24.7942696 Epoch 110: Train time = 11.37 | Train Loss = -24.8516121 Epoch 120: Train time = 11.41 | Train Loss = -24.8999767 Epoch 130: Train time = 11.88 | Train Loss = -24.9390507 Epoch 140: Train time = 11.92 | Train Loss = -24.9723568 Epoch 150: Train time = 11.26 | Train Loss = -24.9982147 Epoch 160: Train time = 11.47 | Train Loss = -25.0150204 Epoch 170: Train time = 11.18 | Train Loss = -25.0327549 Epoch 180: Train time = 11.16 | Train Loss = -25.0474911 Epoch 190: Train time = 11.79 | Train Loss = -25.0596466 Epoch 200: Train time = 11.81 | Train Loss = -25.0661583 Epoch 210: Train time = 10.47 | Train Loss = -25.0742149 Epoch 220: Train time = 11.61 | Train Loss = -25.0837536 Epoch 230: Train time = 11.52 | Train Loss = -25.0911121 Epoch 240: Train time = 11.68 | Train Loss = -25.0968971 Epoch 250: Train time = 11.30 | Train Loss = -25.0991745 Epoch 260: Train time = 11.82 | Train Loss = -25.1038895 Epoch 270: Train time = 11.80 | Train Loss = -25.1098576 Epoch 280: Train time = 11.94 | Train Loss = -25.1103573 Epoch 290: Train time = 11.49 | Train Loss = -25.1137810 Epoch 300: Train time = 11.77 | Train Loss = -25.1192589 Best Training Loss : -25.1192589 IMPROVED VALIDATION LOSS Total time: 3476.73 Complete. Time taken: 3611.52s 5.00% done 10.00% done 15.00% done 20.00% done 25.00% done 30.00% done 35.00% done 40.00% done 45.00% done 50.00% done 55.00% done 60.00% done 65.00% done 70.00% done 75.00% done 80.00% done 85.00% done 90.00% done 95.00% done 100.00% done Time taken: 3739.71 ###Markdown 4.2 Seed 4 ###Code run_seed(4) ###Output W generated. Time taken: 20.03s Calculating eigenvalue Eigenvalue calculated. Time taken: 10.89s W and W_in generated. Time taken: 34.04s Generating z values... z values generated. Time taken: 44.82s Concatenating z with z**2 Contenation complete. 
Time taken: 12.33s Epoch 10: Train time = 10.61 | Train Loss = -22.9272251 Epoch 20: Train time = 10.55 | Train Loss = -23.3383713 Epoch 30: Train time = 10.54 | Train Loss = -23.6743717 Epoch 40: Train time = 10.62 | Train Loss = -23.9476070 Epoch 50: Train time = 10.62 | Train Loss = -24.1759472 Epoch 60: Train time = 10.49 | Train Loss = -24.3554420 Epoch 70: Train time = 10.45 | Train Loss = -24.5061722 Epoch 80: Train time = 10.47 | Train Loss = -24.6273193 Epoch 90: Train time = 10.45 | Train Loss = -24.7201004 Epoch 100: Train time = 10.46 | Train Loss = -24.7961140 Epoch 110: Train time = 10.47 | Train Loss = -24.8592091 Epoch 120: Train time = 10.52 | Train Loss = -24.9023170 Epoch 130: Train time = 10.65 | Train Loss = -24.9416714 Epoch 140: Train time = 10.62 | Train Loss = -24.9747944 Epoch 150: Train time = 10.58 | Train Loss = -24.9996471 Epoch 160: Train time = 10.59 | Train Loss = -25.0169086 Epoch 170: Train time = 10.72 | Train Loss = -25.0365486 Epoch 180: Train time = 10.71 | Train Loss = -25.0506210 Epoch 190: Train time = 10.67 | Train Loss = -25.0605125 Epoch 200: Train time = 11.33 | Train Loss = -25.0700512 Epoch 210: Train time = 11.41 | Train Loss = -25.0799866 Epoch 220: Train time = 11.73 | Train Loss = -25.0853729 Epoch 230: Train time = 10.59 | Train Loss = -25.0932140 Epoch 240: Train time = 10.62 | Train Loss = -25.0928822 Epoch 250: Train time = 10.51 | Train Loss = -25.1008224 Epoch 260: Train time = 10.57 | Train Loss = -25.1041031 Epoch 270: Train time = 10.52 | Train Loss = -25.1082535 Epoch 280: Train time = 10.65 | Train Loss = -25.1118679 Epoch 290: Train time = 10.61 | Train Loss = -25.1182270 Epoch 300: Train time = 10.53 | Train Loss = -25.1198902 Best Training Loss : -25.1206436 IMPROVED VALIDATION LOSS Total time: 3183.81 Complete. Time taken: 3275.46s 5.00% done 10.00% done 15.00% done 20.00% done 25.00% done 30.00% done 35.00% done 40.00% done 45.00% done 50.00% done 55.00% done 60.00% done 65.00% done 70.00% done 75.00% done 80.00% done 85.00% done 90.00% done 95.00% done 100.00% done Time taken: 3273.54 ###Markdown 4.3 Seed 6 ###Code run_seed(6) ###Output W generated. Time taken: 19.91s Calculating eigenvalue Eigenvalue calculated. Time taken: 0.87s W and W_in generated. Time taken: 23.79s Generating z values... z values generated. Time taken: 46.30s Concatenating z with z**2 Contenation complete. 
Time taken: 12.80s Epoch 10: Train time = 10.55 | Train Loss = -22.9121494 Epoch 20: Train time = 10.44 | Train Loss = -23.3158188 Epoch 30: Train time = 10.45 | Train Loss = -23.6520786 Epoch 40: Train time = 10.46 | Train Loss = -23.9250031 Epoch 50: Train time = 10.50 | Train Loss = -24.1502171 Epoch 60: Train time = 10.55 | Train Loss = -24.3364277 Epoch 70: Train time = 10.43 | Train Loss = -24.4847870 Epoch 80: Train time = 10.52 | Train Loss = -24.6057301 Epoch 90: Train time = 10.51 | Train Loss = -24.7036362 Epoch 100: Train time = 10.57 | Train Loss = -24.7802105 Epoch 110: Train time = 10.52 | Train Loss = -24.8450794 Epoch 120: Train time = 10.48 | Train Loss = -24.8915462 Epoch 130: Train time = 10.57 | Train Loss = -24.9367123 Epoch 140: Train time = 10.59 | Train Loss = -24.9637394 Epoch 150: Train time = 10.49 | Train Loss = -24.9943676 Epoch 160: Train time = 10.44 | Train Loss = -25.0121307 Epoch 170: Train time = 10.46 | Train Loss = -25.0292702 Epoch 180: Train time = 10.54 | Train Loss = -25.0448818 Epoch 190: Train time = 10.55 | Train Loss = -25.0538654 Epoch 200: Train time = 10.46 | Train Loss = -25.0634823 Epoch 210: Train time = 11.30 | Train Loss = -25.0726891 Epoch 220: Train time = 11.12 | Train Loss = -25.0801964 Epoch 230: Train time = 10.46 | Train Loss = -25.0870285 Epoch 240: Train time = 10.48 | Train Loss = -25.0933285 Epoch 250: Train time = 10.44 | Train Loss = -25.0966988 Epoch 260: Train time = 10.47 | Train Loss = -25.1018162 Epoch 270: Train time = 10.44 | Train Loss = -25.1033001 Epoch 280: Train time = 10.45 | Train Loss = -25.1096992 Epoch 290: Train time = 10.44 | Train Loss = -25.1165161 Epoch 300: Train time = 10.43 | Train Loss = -25.1137981 Best Training Loss : -25.1176052 IMPROVED VALIDATION LOSS Total time: 3164.20 Complete. Time taken: 3247.54s 5.00% done 10.00% done 15.00% done 20.00% done 25.00% done 30.00% done 35.00% done 40.00% done 45.00% done 50.00% done 55.00% done 60.00% done 65.00% done 70.00% done 75.00% done 80.00% done 85.00% done 90.00% done 95.00% done 100.00% done Time taken: 3164.22 ###Markdown 4.4 Seed 8 ###Code run_seed(8) ###Output W generated. Time taken: 21.25s Calculating eigenvalue Eigenvalue calculated. Time taken: 3.27s W and W_in generated. Time taken: 27.57s Generating z values... z values generated. Time taken: 44.17s Concatenating z with z**2 Contenation complete. 
Time taken: 10.61s Epoch 10: Train time = 9.08 | Train Loss = -22.9297047 Epoch 20: Train time = 9.15 | Train Loss = -23.3345394 Epoch 30: Train time = 9.20 | Train Loss = -23.6681709 Epoch 40: Train time = 9.16 | Train Loss = -23.9471054 Epoch 50: Train time = 9.16 | Train Loss = -24.1755581 Epoch 60: Train time = 9.35 | Train Loss = -24.3570347 Epoch 70: Train time = 9.12 | Train Loss = -24.5010033 Epoch 80: Train time = 9.08 | Train Loss = -24.6210518 Epoch 90: Train time = 9.17 | Train Loss = -24.7184849 Epoch 100: Train time = 9.07 | Train Loss = -24.7930870 Epoch 110: Train time = 9.04 | Train Loss = -24.8574944 Epoch 120: Train time = 9.10 | Train Loss = -24.9039974 Epoch 130: Train time = 9.08 | Train Loss = -24.9395752 Epoch 140: Train time = 9.10 | Train Loss = -24.9761009 Epoch 150: Train time = 9.03 | Train Loss = -24.9941692 Epoch 160: Train time = 9.11 | Train Loss = -25.0178833 Epoch 170: Train time = 9.14 | Train Loss = -25.0311241 Epoch 180: Train time = 9.05 | Train Loss = -25.0480747 Epoch 190: Train time = 9.10 | Train Loss = -25.0643101 Epoch 200: Train time = 9.10 | Train Loss = -25.0707169 Epoch 210: Train time = 9.02 | Train Loss = -25.0755806 Epoch 220: Train time = 9.05 | Train Loss = -25.0848846 Epoch 230: Train time = 9.04 | Train Loss = -25.0852814 Epoch 240: Train time = 9.04 | Train Loss = -25.0929070 Epoch 250: Train time = 8.99 | Train Loss = -25.1018448 Epoch 260: Train time = 8.99 | Train Loss = -25.1042461 Epoch 270: Train time = 9.08 | Train Loss = -25.1076641 Epoch 280: Train time = 9.05 | Train Loss = -25.1128063 Epoch 290: Train time = 9.07 | Train Loss = -25.1142483 Epoch 300: Train time = 9.06 | Train Loss = -25.1170845 Best Training Loss : -25.1194763 IMPROVED VALIDATION LOSS Total time: 2724.67 Complete. Time taken: 2807.50s 5.00% done 10.00% done 15.00% done 20.00% done 25.00% done 30.00% done 35.00% done 40.00% done 45.00% done 50.00% done 55.00% done 60.00% done 65.00% done 70.00% done 75.00% done 80.00% done 85.00% done 90.00% done 95.00% done 100.00% done Time taken: 3211.77 ###Markdown 4.5 Seed 42 ###Code run_seed(42) ###Output W generated. Time taken: 18.93s Calculating eigenvalue Eigenvalue calculated. Time taken: 3.13s W and W_in generated. Time taken: 25.09s Generating z values... z values generated. Time taken: 45.08s Concatenating z with z**2 Contenation complete. 
Time taken: 11.09s Epoch 10: Train time = 9.01 | Train Loss = -22.9240894 Epoch 20: Train time = 9.06 | Train Loss = -23.3326111 Epoch 30: Train time = 9.02 | Train Loss = -23.6687527 Epoch 40: Train time = 9.09 | Train Loss = -23.9453468 Epoch 50: Train time = 9.06 | Train Loss = -24.1672344 Epoch 60: Train time = 9.05 | Train Loss = -24.3530941 Epoch 70: Train time = 9.05 | Train Loss = -24.4989910 Epoch 80: Train time = 9.06 | Train Loss = -24.6193314 Epoch 90: Train time = 9.04 | Train Loss = -24.7118816 Epoch 100: Train time = 9.03 | Train Loss = -24.7892590 Epoch 110: Train time = 9.04 | Train Loss = -24.8515968 Epoch 120: Train time = 9.07 | Train Loss = -24.9018784 Epoch 130: Train time = 9.05 | Train Loss = -24.9405842 Epoch 140: Train time = 9.08 | Train Loss = -24.9708633 Epoch 150: Train time = 9.11 | Train Loss = -24.9939232 Epoch 160: Train time = 9.11 | Train Loss = -25.0173569 Epoch 170: Train time = 9.14 | Train Loss = -25.0338898 Epoch 180: Train time = 9.07 | Train Loss = -25.0469151 Epoch 190: Train time = 9.01 | Train Loss = -25.0604038 Epoch 200: Train time = 9.06 | Train Loss = -25.0665722 Epoch 210: Train time = 9.09 | Train Loss = -25.0730114 Epoch 220: Train time = 9.04 | Train Loss = -25.0823269 Epoch 230: Train time = 9.07 | Train Loss = -25.0868168 Epoch 240: Train time = 9.06 | Train Loss = -25.0952644 Epoch 250: Train time = 9.10 | Train Loss = -25.0994129 Epoch 260: Train time = 9.04 | Train Loss = -25.1006927 Epoch 270: Train time = 9.05 | Train Loss = -25.1064510 Epoch 280: Train time = 9.09 | Train Loss = -25.1096573 Epoch 290: Train time = 9.13 | Train Loss = -25.1164894 Epoch 300: Train time = 9.04 | Train Loss = -25.1197014 Best Training Loss : -25.1201534 IMPROVED VALIDATION LOSS Total time: 2719.21 Complete. Time taken: 2800.93s 5.00% done 10.00% done 15.00% done 20.00% done 25.00% done 30.00% done 35.00% done 40.00% done 45.00% done 50.00% done 55.00% done 60.00% done 65.00% done 70.00% done 75.00% done 80.00% done 85.00% done 90.00% done 95.00% done 100.00% done Time taken: 3211.85 ###Markdown 4.6 Compilation ###Code mu_preds = [] sd_preds = [] for dirpath, dirnames, filenames in os.walk(res_folder): for f in filenames: if f[:2] == "mu": mu_preds.append(load_obj(os.path.join(res_folder, f))) elif f[:2] == "sd": sd_preds.append(load_obj(os.path.join(res_folder, f))) mu_preds = np.array(mu_preds) sd_preds = np.array(sd_preds) print(f"mean preds shape: {mu_preds.shape}") print(f"sd preds shape: {sd_preds.shape}") ###Output mean preds shape: (5, 100, 400, 40) sd preds shape: (5, 100, 400, 40) ###Markdown 5. 
Analyze results 5.1 MSE ###Code mve_s_mean = mu_preds.mean(axis = 0) mve_s_sigma = np.sqrt((sd_preds**2 + mu_preds**2).mean(axis = 0) - mve_s_mean**2) desc_name = "rc_nn" + str(nn_size) + "_combined" res_mve = PointExperimentResult(mve_s_mean - y_test, desc_name) res_mve.plot_rmse(error_thresh = 0.5) res_mve.get_loss([0.2, 0.5, 1, 2, 3]) ###Output Median NRMSE at t = 0.2: 0.382 Median NRMSE at t = 0.5: 0.627 Median NRMSE at t = 1: 0.849 Median NRMSE at t = 2: 0.963 Median NRMSE at t = 3: 0.981 ###Markdown 5.2 Variance **Visualise for one dataset** ###Code idx = 0 plt.plot(mve_s_sigma[0].mean(axis = 1)**2) plt.grid("on") plt.xlabel("Time steps") plt.ylabel("Variance") plt.show() ###Output _____no_output_____ ###Markdown 5.3 Negative Log LH ###Code plt.plot(neg_log_LH(mve_s_mean, mve_s_sigma)) plt.title("Negative Log LH against time") plt.xlabel("Time steps") plt.ylabel("Negative Log LH") # plt.yscale("log") plt.grid("on") plt.show() print(f"Mean negative log LH: {neg_log_LH(mve_s_mean, mve_s_sigma).mean():.5f}") ###Output Mean negative log LH: 32.69201
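###Markdown A note on the two quantities combined above (added for reference): the ensemble mean and spread are obtained by moment-matching the five seed predictions, i.e. the combined variance is mean(sigma_i**2 + mu_i**2) - mu_bar**2, which is the law of total variance across seeds. The negative log likelihood curve is consistent with a per-dimension Gaussian score; a minimal sketch of such a score is below. The exact definition of neg_log_LH, the axis conventions, and the use of y_test are assumptions here, since the function itself is defined earlier in the notebook. ###Code
import numpy as np

def gaussian_neg_log_lh(mu, sigma, y, eps=1e-12):
    # Assumed stand-in for neg_log_LH: per-point Gaussian negative log likelihood,
    # summed over the 40 state dimensions and averaged over the 100 test datasets,
    # leaving one value per time step (shape (400,) for the arrays above).
    var = sigma ** 2 + eps
    nll = 0.5 * (np.log(2.0 * np.pi * var) + (y - mu) ** 2 / var)
    return nll.sum(axis=2).mean(axis=0)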
Write A Data Science Blog Post - FIFA 2019 Players.ipynb
###Markdown Queries / Questions being investigated through the Data,As a football fan, I'm interested in exploring FIFA-2019 Complete Dataset of players. I will be trying to answer the following questionnaires through the data analysis I am gonna do here : Question 1: List of 25 nations in Decending Order, having more number of Soccer Players according the latest FIFA-2019 datasets available on the Kaggle website. Question 2: Listing 10 clubs in descending order, with the highest total player market value and the highest average player wage. Question 3: AGE-WISE distribution of the players present in FIFA-2019. Question 4: Selection of the best Squad according to the preferref position of the players. Question 5: FInding the correlation between features such as Age, Overall, Potential, Position, Club, Nationality, Special vs Value/Wage.Understanding these questions may provide some advice to the Football Club Manager, no matter in Video Game, or in real Professional football. Data UnderstandingThrough this project, we will use FIFA-2019 Complete Player Dataset from kaggle. For this project, I will use the data2019.csv which contains all the information of the Players of FIFA-2019. Here we go. ###Code # Read in the Dataset file "data2019" downloaded from Kaggle. data2019 = pd.read_csv('./data2019.csv') data2019.head() # Basic Description of the dataset data2019.describe() # Description of the datatypes of the dataset data2019.info() #Calculate the number of rows & Columns in the dataset num_rows = data2019.shape[0] num_cols = data2019.shape[1] print("Row Number: {}".format(num_rows)) print("Column Number: {}".format(num_cols)) ###Output Row Number: 18207 Column Number: 89 ###Markdown Data Preparation / PreprocessingWe gonna do the following steps as a part of Data Preparation:1. Check columns having missing values.2. Drop unused columns3. Conversion of string values into numbers for Value & Wage.4. One-Hot Encoding for Categorical variables such as Club, Nationality, Preferred Positions. ###Code # Data Preparation Step 1: Check whether any column has missing values columns_with_missing_values = set(data2019.columns[data2019.isnull().mean()!=0]) print(columns_with_missing_values) ###Output {'Strength', 'StandingTackle', 'Finishing', 'Dribbling', 'RM', 'LongPassing', 'Balance', 'HeadingAccuracy', 'RW', 'LongShots', 'RB', 'LW', 'Preferred Positions', 'ShotPower', 'Volleys', 'RCB', 'ST', 'Agility', 'Crossing', 'Work Rate', 'Positioning', 'Composure', 'GKPositioning', 'Jersey Number', 'CDM', 'Vision', 'Body Type', 'Joined', 'Preferred Foot', 'RCM', 'Curve', 'Loaned From', 'Height', 'LF', 'CM', 'RDM', 'Reactions', 'Jumping', 'Stamina', 'GKHandling', 'GKReflexes', 'RAM', 'GKDiving', 'Penalties', 'CF', 'LDM', 'LWB', 'Marking', 'LM', 'LB', 'Interceptions', 'Club', 'International Reputation', 'LCM', 'FKAccuracy', 'BallControl', 'Contract Valid Until', 'SprintSpeed', 'LAM', 'Real Face', 'Weight', 'RWB', 'SlidingTackle', 'Aggression', 'Release Clause', 'CB', 'LS', 'Skill Moves', 'Weak Foot', 'RS', 'ShortPassing', 'Acceleration', 'CAM', 'GKKicking', 'LCB', 'RF'} ###Markdown It can be observed that most of the columns with missing values are the ratings at all positions, because of Goalkeepers. And these columns except Club, have not been used in my questions. However for Club, it can be said that a player may not belong to any club for the moment. Therefore, any club insterested in him may sign this player without paying any transfer fees. 
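###Markdown Before dropping anything, it helps to quantify how much is actually missing and how many club-less players there are. A small, purely read-only inspection sketch (added; column names as used above): ###Code
# Share of missing values per column, in percent (top 10 shown).
missing_pct = 100 * data2019.isnull().mean().sort_values(ascending=False)
print(missing_pct.head(10))

# Players currently without a club -- the "free agents" described above.
free_agents = data2019[data2019['Club'].isnull()]
print("Players without a club: {}".format(free_agents.shape[0]))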
###Code # Data Preparation Step 2: Dropping columns which will not be used in this project data2019.drop('Photo', axis = 1,inplace=True) data2019.drop('Flag', axis = 1,inplace=True) data2019.drop('Club Logo', axis = 1,inplace=True) data2019.drop('ID', axis = 1,inplace=True) data2019.head() # Function to convert string values into numbers def str2number(amount): """ This function perform convertion from amount values in string type to float type numbers Parameter: amount(str): Amount values in string type with M & K as Abbreviation for Million and Thousands Returns: float: A float number represents the numerical value of the input parameter amount(str) """ if amount[-1] == 'M': return float(amount[1:-1])*1000000 elif amount[-1] == 'K': return float(amount[1:-1])*1000 else: return float(amount[1:]) # Data Preparation Step 3: Conversion of string values into numbers for Value & Wage data2019['Wage_Number'] = data2019['Wage'].map(lambda x: str2number(x)) data2019['Value_Number'] = data2019['Value'].map(lambda x: str2number(x)) # Data Preparation Step 4: One-Hot Encoding for Categorical variables such as Club, Nationality, Preferred Positions. le = LabelEncoder() data2019['Club_onehot_encode'] = le.fit_transform(data2019['Club'].astype(str)) data2019['Nationality_onehot_encode'] = le.fit_transform(data2019['Nationality'].astype(str)) data2019['Preferred Positions_onehot_encode'] = le.fit_transform(data2019['Preferred Positions'].astype(str)) ###Output _____no_output_____ ###Markdown Solution to the queries raised above: Since data preprocessing/preparation is over, we may try to the answer the questions as raised above: ###Code # Question 1: List of 25 nations in Decending Order, having more number of Soccer Players according the latest FIFA-2019 datasets available on the Kaggle website. nationality_vals = data2019.Nationality.value_counts() print(nationality_vals.head(25)) (nationality_vals.head(20)/data2019.shape[0]).plot(kind="bar"); plt.title("Top 25 FIFA-2019 Players Nationality Distribution(In %age)"); ###Output England 1662 Germany 1198 Spain 1072 Argentina 937 France 914 Brazil 827 Italy 702 Colombia 618 Japan 478 Netherlands 453 Sweden 397 China PR 392 Chile 391 Republic of Ireland 368 Mexico 366 United States 353 Poland 350 Norway 341 Saudi Arabia 340 Denmark 336 Korea Republic 335 Portugal 322 Turkey 303 Austria 298 Scotland 286 Name: Nationality, dtype: int64 ###Markdown We can derive from the above result and plot that England, Germany and Spain are the top 3 nations having more number of players in FIFA-2019. France is at 5th position which is also an European nation. It can be justified as Barclays Premier League, Bundesliga, La Liga and Ligue 1 are among the Five Football Leagues in Europe. These leagues represent the best football in Europe and even in the world, attracting many football stars and often guiding the new direction of football development. The 4th & 6th ranking is Agentina and Brazil which have the most gifted football players in the world. ###Code # Question 2: Listing 10 clubs in descending order, with the highest total player market value and the highest average player wage. 
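# Added aside (hedged): two quick checks on the preparation steps above before
# computing the Question 2 rankings.
# (1) Despite the "One-Hot Encoding" heading, LabelEncoder produces integer label
#     encoding rather than true one-hot columns; for the Question 5 correlations this
#     imposes an arbitrary ordering on Club/Nationality/Position (pd.get_dummies
#     would give genuine one-hot features if that matters).
# (2) Sanity-check the str2number helper on the value formats seen in this dataset
#     (assuming strings such as '€110.5M' and '€565K').
assert str2number('€110.5M') == 110.5e6
assert str2number('€565K') == 565e3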
Value_Wage_DF = data2019[["Name", "Club", "Value_Number", "Wage_Number"]] Value_Wage_DF.head(10) # Top 10 clubs with the highest average wage Value_Wage_DF.groupby("Club")["Wage_Number"].mean().sort_values(ascending=False).head(10).plot(kind="bar"); plt.title("Top 10 clubs with the highest average wage"); ###Output _____no_output_____ ###Markdown As observed from plot, Real Madrid, FC Barcelona & Juventus are the top3 clubs who is paying highest average wage to its players. High paying in wage also help these clubs to attract the most valuable players to play for the clubs. ###Code # Top 10 clubs with the highest total player market value Value_Wage_DF.groupby("Club")["Value_Number"].sum().sort_values(ascending=False).head(10).plot(kind="bar"); plt.title("Top 10 clubs with the highest total player market Value"); # Question 3: AGE-WISE distribution of the players present in FIFA-2019. age_vals = data2019.Age.value_counts() print(age_vals.head(20)) (age_vals.head(20)/data2019.shape[0]).plot(kind="bar"); plt.title("FIFA-2019 Players Age Distribution (In %age)"); ###Output 21 1423 26 1387 24 1358 22 1340 23 1332 25 1319 20 1240 27 1162 28 1101 19 1024 29 959 30 917 18 732 31 707 32 574 33 408 34 404 17 289 35 196 36 127 Name: Age, dtype: int64 ###Markdown It can be clearly understood from the plot that the most players are between 21–25 years old. People at this age group are the best years of athletes in lifetime. Players younger than that may not have enough skills and experiences, whereas players above that might have already retired from the football. So, there is a drop in those age groups. ###Code # Question 4: Selection of the best Squad according to the preferref position of the players. BestSquad_DF = data2019[['Name', 'Age', 'Overall', 'Potential', 'Preferred Positions']] BestSquad_DF.head(5) ###Output _____no_output_____ ###Markdown On the pattern of video game FIFA-2019, human player selects expert players for each position to win the matches. Based on FIFA-2019 dataset, we will try to find the best squad according to the preferresitions of the star players. Here, I have considered two best squads such as: Formation 4–3–3 and Formation 3–4–1–2. 
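###Markdown The helper in the next cell generalises a simple rule: for every required position, pick the not-yet-chosen player with the highest Overall rating. As a one-position illustration of that rule (added; like the helper itself, it assumes the position label, here 'ST', appears verbatim in the Preferred Positions column): ###Code
# Best available striker by Overall rating -- the single-position version of the
# selection that find_best_squad (next cell) repeats for a whole formation.
st_mask = BestSquad_DF['Preferred Positions'] == 'ST'
best_st = BestSquad_DF.loc[BestSquad_DF[st_mask]['Overall'].idxmax()]
print(best_st[['Name', 'Overall']])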
###Code def find_best_squad(position): """ This function perform selection of the player with highest Overall Value for each provided position Parameter: position(str): a particular position of a certain footbal formation Returns: Position: The position from Input Parameter Player: The Best Player Name for this Position Overall: The Overall Value for this Best Player """ BestSquad_DF_copy = BestSquad_DF.copy() BestSquad = [] for i in position: BestSquad.append([i,BestSquad_DF_copy.loc[[BestSquad_DF_copy[BestSquad_DF_copy['Preferred Positions'] == i]['Overall'].idxmax()]]['Name'].to_string(index = False), BestSquad_DF_copy[BestSquad_DF_copy['Preferred Positions'] == i]['Overall'].max()]) BestSquad_DF_copy.drop(BestSquad_DF_copy[BestSquad_DF_copy['Preferred Positions'] == i]['Overall'].idxmax(), inplace = True) return pd.DataFrame(np.array(BestSquad).reshape(11,3), columns = ['Position', 'Player', 'Overall']).to_string(index = False) # Formation 4-3-3 squad_Formation433 = ['GK', 'LB', 'CB', 'CB', 'RB', 'LM', 'CDM', 'RM', 'LW', 'ST', 'RW'] print ('Best Squad of Formation 4-3-3') print (find_best_squad(squad_Formation433)) # Formation 3-4-1-2 squad_Formation3412 = ['GK', 'CB', 'CB', 'CB', 'LM', 'CM', 'CM', 'RM', 'CAM', 'ST', 'ST'] print ('Best Squad of Formation 3-4-1-2') print (find_best_squad(squad_Formation3412)) # Question 5: Finding the correlation between features such as Age, Overall, Potential, Position, Club, Nationality, Special vs Value/Wage. Correlation_DF = data2019[['Name', 'Age', 'Overall', 'Potential', 'Preferred Positions_onehot_encode', 'Club_onehot_encode', 'Nationality_onehot_encode', 'Special', 'Value_Number', 'Wage_Number']] Correlation_DF.corr() colormap = plt.cm.inferno plt.figure(figsize=(16,12)) plt.title('Correlation between Age, Overall, Potential, Position, Club, Nationality, Special vs Value/Wage', y=1.05, size=15) sns.heatmap(Correlation_DF.corr(),linewidths=0.1,vmax=1.0, square=True, cmap=colormap, linecolor='white', annot=True) ###Output _____no_output_____
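###Markdown A closing note on Question 5 (added): Value and Wage are heavily right-skewed, so Pearson correlations are dominated by a handful of star players, and the label-encoded Club/Nationality/Position columns carry an arbitrary ordering. A rank-based (Spearman) correlation is a cheap robustness check -- a minimal sketch, reusing the Correlation_DF frame built above: ###Code
# Spearman (rank) correlation of each numeric feature with player market value.
spearman = Correlation_DF.drop(columns=['Name']).corr(method='spearman')
print(spearman['Value_Number'].drop('Value_Number').sort_values(ascending=False))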
how-to-use-azureml/manage-azureml-service/authentication-in-azureml/authentication-in-azureml.ipynb
###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/manage-azureml-service/authentication-in-azureml/authentication-in-azureml.png) Authentication in Azure Machine LearningThis notebook shows you how to authenticate to your Azure ML Workspace using 1. Interactive Login Authentication 2. Azure CLI Authentication 3. Managed Service Identity (MSI) Authentication 4. Service Principal Authentication 5. Token Authentication The interactive authentication is suitable for local experimentation on your own computer. Azure CLI authentication is suitable if you are already using Azure CLI for managing Azure resources, and want to sign in only once. The MSI and Service Principal authentication are suitable for automated workflows, for example as part of Azure Devops build. ###Code from azureml.core import Workspace ###Output _____no_output_____ ###Markdown Interactive AuthenticationInteractive authentication is the default mode when using Azure ML SDK.When you connect to your workspace using workspace.from_config, you will get an interactive login dialog. ###Code ws = Workspace.from_config() ###Output _____no_output_____ ###Markdown Also, if you explicitly specify the subscription ID, resource group and workspace name, you will get the dialog. ###Code ws = Workspace(subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace") ###Output _____no_output_____ ###Markdown Note the user you're authenticated as must have access to the subscription and resource group. If you receive an error```AuthenticationException: You don't have access to xxxxxx-xxxx-xxx-xxx-xxxxxxxxxx subscription. All the subscriptions that you have access to = ...```check that the you used correct login and entered the correct subscription ID. In some cases, you may see a version of the error message containing text: ```All the subscriptions that you have access to = []```In such a case, you may have to specify the tenant ID of the Azure Active Directory you're using. An example would be accessing a subscription as a guest to a tenant that is not your default. You specify the tenant by explicitly instantiating _InteractiveLoginAuthentication_ with Tenant ID as argument. The Tenant ID can be found, for example, from https://portal.azure.com under **Azure Active Directory**, **Properties** as Directory ID. ###Code from azureml.core.authentication import InteractiveLoginAuthentication interactive_auth = InteractiveLoginAuthentication(tenant_id="my-tenant-id") ws = Workspace(subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace", auth=interactive_auth) ###Output _____no_output_____ ###Markdown Despite having access to the workspace, you may sometimes see the following error when retrieving it:```You are currently logged-in to xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx tenant. You don't have access to xxxxxx-xxxx-xxx-xxx-xxxxxxxxxx subscription, please check if it is in this tenant.```This error sometimes occurs when you are trying to access a subscription to which you were recently added. In this case, you need to force authentication again to avoid using a cached authentication token that has not picked up the new permissions. 
You can do so by setting `force=true` on the `InteractiveLoginAuthentication()` object's constructor as follows: ###Code forced_interactive_auth = InteractiveLoginAuthentication(tenant_id="my-tenant-id", force=True) ws = Workspace(subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace", auth=forced_interactive_auth) ###Output _____no_output_____ ###Markdown Azure CLI AuthenticationIf you have installed azure-cli package, and used ```az login``` command to log in to your Azure Subscription, you can use _AzureCliAuthentication_ class.Note that interactive authentication described above won't use existing Azure CLI auth tokens. ###Code from azureml.core.authentication import AzureCliAuthentication cli_auth = AzureCliAuthentication() ws = Workspace(subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace", auth=cli_auth) print("Found workspace {} at location {}".format(ws.name, ws.location)) ###Output _____no_output_____ ###Markdown MSI Authentication__Note__: _MSI authentication is supported only when using SDK from Azure Virtual Machine. The code below will fail on local computer._When using Azure ML SDK on Azure Virtual Machine (VM), you can use Managed Service Identity (MSI) based authentication. This mode allows the VM connect to the Workspace without storing credentials in the Python code.As a prerequisite, enable System-assigned Managed Identity for your VM as described in [Configure managed identities for Azure resources on a VM using the Azure portal](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm).Then, assign the VM access to your Workspace. For example from Azure Portal, navigate to your workspace, select __Access Control (IAM)__, __Add Role Assignment__, specify __Virtual Machine__ for __Assign Access To__ dropdown, and select your VM's identity.![msi assignment](images/msiaccess.PNG)After completing these steps, you can use authenticate using MsiAuthentication instance. ###Code from azureml.core.authentication import MsiAuthentication msi_auth = MsiAuthentication() ws = Workspace(subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace", auth=msi_auth) print("Found workspace {} at location {}".format(ws.name, ws.location)) ###Output _____no_output_____ ###Markdown Service Principal AuthenticationWhen setting up a machine learning workflow as an automated process, we recommend using Service Principal Authentication. This approach decouples the authentication from any specific user login, and allows managed access control.Note that you must have administrator privileges over the Azure subscription to complete these steps.The first step is to create a service principal. First, go to [Azure Portal](https://portal.azure.com), select **Azure Active Directory** and **App Registrations**. Then select **+New application**, give your service principal a name, for example _my-svc-principal_. You can leave other parameters as is.Then click **Register**.![service principal creation](images/svc-pr-1.PNG) From the page for your newly created service principal, copy the _Application ID_ and _Tenant ID_ as they are needed later.![application and tenant id](images/svc-pr-2.PNG) Then select **Certificates & secrets**, and **+New client secret** write a description for your key, and select duration. 
Then click **Add**, and copy the value of client secret to a secure location.![tenant id](images/svc-pr-3.PNG) Finally, you need to give the service principal permissions to access your workspace. Navigate to **Resource Groups**, to the resource group for your Machine Learning Workspace. Then select **Access Control (IAM)** and **Add a role assignment**. For _Role_, specify which level of access you need to grant, for example _Contributor_. Start entering your service principal name and once it is found, select it, and click **Save**.![add role](images/svc-pr-4.PNG) Now you are ready to use the service principal authentication. For example, to connect to your Workspace, see code below and enter your own values for tenant ID, application ID, subscription ID, resource group and workspace.**We strongly recommended that you do not insert the secret password to code**. Instead, you can use environment variables to pass it to your code, for example through Azure Key Vault, or through secret build variables in Azure DevOps. For local testing, you can for example use following PowerShell command to set the environment variable.```$env:AZUREML_PASSWORD = "my-password"``` ###Code import os from azureml.core.authentication import ServicePrincipalAuthentication svc_pr_password = os.environ.get("AZUREML_PASSWORD") svc_pr = ServicePrincipalAuthentication( tenant_id="my-tenant-id", service_principal_id="my-application-id", service_principal_password=svc_pr_password) ws = Workspace( subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace", auth=svc_pr ) print("Found workspace {} at location {}".format(ws.name, ws.location)) ###Output _____no_output_____ ###Markdown See [Register an application with the Microsoft identity platform](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app) quickstart for more details about application registrations. Token AuthenticationWhen token generation and its refresh needs to be outside on AML SDK, we recommend using Token Authentication. It can be used for getting token for AML or ARM audience. Thus giving more granular control over token generated.This authentication class requires users to provide method `get_token_for_audience` which will be called to retrieve the token based on the audience passed.Audience that is passed to `get_token_for_audience` can be ARM or AML. Exact value that will be passed as audience will depend on cloud and type for audience. 
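###Markdown The next cell implements `get_token_for_audience` with the older `adal` library. For reference, a hedged alternative using the `msal` package is sketched below (it assumes the same client ID, client secret and tenant ID placeholders; error handling is omitted): ###Code
import msal

def get_token_for_audience_msal(audience):
    # Acquire an app-only token for the given resource audience via MSAL.
    app = msal.ConfidentialClientApplication(
        client_id="my-client-id",
        client_credential="my-client-secret",
        authority="https://login.microsoftonline.com/my-tenant-id")
    # The v2.0 endpoint expects the resource expressed as an "<audience>/.default" scope;
    # strip any trailing slash on the audience before appending it.
    result = app.acquire_token_for_client(scopes=[audience.rstrip("/") + "/.default"])
    return result["access_token"]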
###Code from azureml.core.authentication import TokenAuthentication, Audience # This is a sample method to retrieve token and will be passed to TokenAuthentication def get_token_for_audience(audience): from adal import AuthenticationContext client_id = "my-client-id" client_secret = "my-client-secret" tenant_id = "my-tenant-id" auth_context = AuthenticationContext("https://login.microsoftonline.com/{}".format(tenant_id)) resp = auth_context.acquire_token_with_client_credentials(audience,client_id,client_secret) token = resp["accessToken"] return token token_auth = TokenAuthentication(get_token_for_audience=get_token_for_audience) ws = Workspace( subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace", auth=token_auth ) print("Found workspace {} at location {}".format(ws.name, ws.location)) token_aml_audience = token_auth.get_token(Audience.aml) token_arm_audience = token_auth.get_token(Audience.arm) # Value of audience pass to `get_token_for_audience` can be retrieved as follows: # aud_aml_val = token_auth.get_aml_resource_id() # For AML # aud_arm_val = token_auth._cloud_type.endpoints.active_directory_resource_id # For ARM ###Output _____no_output_____ ###Markdown Token authentication object can be used to retrieve token for either AML or ARM audience,which can be used by other clients to authenticate to AML or ARM Using Secrets in Remote RunsSometimes, you may have to pass a secret to a remote run, for example username and password to authenticate against external data source.Azure ML SDK enables this use case through Key Vault associated with your workspace. The workflow for adding a secret is following.On local computer: 1. Read in a local secret, for example from environment variable or user input. To keep them secret, do not insert secret values into code as hard-coded strings. 2. Obtain a reference to the keyvault 3. Add the secret name-value pair in the key vault. The secret is then available for remote runs as shown further below.__Note__: The _azureml.core.keyvault.Keyvault_ is different from _azure.keyvault_ library. It is intended as simplified wrapper for setting, getting and listing user secrets in Workspace Key Vault. ###Code import uuid local_secret = os.environ.get("LOCAL_SECRET", default = str(uuid.uuid4())) # Use random UUID as a substitute for real secret. keyvault = ws.get_default_keyvault() keyvault.set_secret(name="secret-name", value = local_secret) ###Output _____no_output_____ ###Markdown The _set_secret_ method adds a new secret if one doesn't exist, or updates an existing one with new value.You can list secret names you've added. This method doesn't return the values of the secrets. ###Code keyvault.list_secrets() ###Output _____no_output_____ ###Markdown You can retrieve the value of the secret, and validate that it matches the original value. __Note__: This method returns the secret value. Take care not to write the the secret value to output. ###Code retrieved_secret = keyvault.get_secret(name="secret-name") local_secret==retrieved_secret ###Output _____no_output_____ ###Markdown In submitted runs on local and remote compute, you can use the get_secret method of Run instance to get the secret value from Key Vault. The method gives you a simple shortcut: the Run instance is aware of its Workspace and Keyvault, so it can directly obtain the secret without you having to instantiate the Workspace and Keyvault within remote run.__Note__: This method returns the secret value. 
Take care not to write the secret to output.For example, let's create a simple script _get_secret.py_ that gets the secret we set earlier. In an actual appication, you would use the secret, for example to access a database or other password-protected resource. ###Code %%writefile get_secret.py from azureml.core import Run run = Run.get_context() secret_value = run.get_secret(name="secret-name") print("Got secret value {} , but don't write it out!".format(len(secret_value) * "*")) ###Output _____no_output_____ ###Markdown Then, submit the script as a regular script run, and find the obfuscated secret value in run output. You can use the same approach to other kinds of runs, such as Estimator ones. ###Code from azureml.core import Experiment from azureml.core.script_run_config import ScriptRunConfig exp = Experiment(workspace = ws, name="try-secret") src = ScriptRunConfig(source_directory=".", script="get_secret.py") run = exp.submit(src) run.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/manage-azureml-service/authentication-in-azureml/authentication-in-azureml.png) Authentication in Azure Machine LearningThis notebook shows you how to authenticate to your Azure ML Workspace using 1. Interactive Login Authentication 2. Azure CLI Authentication 3. Managed Service Identity (MSI) Authentication 4. Service Principal Authentication The interactive authentication is suitable for local experimentation on your own computer. Azure CLI authentication is suitable if you are already using Azure CLI for managing Azure resources, and want to sign in only once. The MSI and Service Principal authentication are suitable for automated workflows, for example as part of Azure Devops build. ###Code from azureml.core import Workspace ###Output _____no_output_____ ###Markdown Interactive AuthenticationInteractive authentication is the default mode when using Azure ML SDK.When you connect to your workspace using workspace.from_config, you will get an interactive login dialog. ###Code ws = Workspace.from_config() ###Output _____no_output_____ ###Markdown Also, if you explicitly specify the subscription ID, resource group and resource group, you will get the dialog. ###Code ws = Workspace(subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace") ###Output _____no_output_____ ###Markdown Note the user you're authenticated as must have access to the subscription and resource group. If you receive an error```AuthenticationException: You don't have access to xxxxxx-xxxx-xxx-xxx-xxxxxxxxxx subscription. All the subscriptions that you have access to = ...```check that the you used correct login and entered the correct subscription ID. In some cases, you may see a version of the error message containing text: ```All the subscriptions that you have access to = []```In such a case, you may have to specify the tenant ID of the Azure Active Directory you're using. An example would be accessing a subscription as a guest to a tenant that is not your default. You specify the tenant by explicitly instantiating _InteractiveLoginAuthentication_ with Tenant ID as argument. The Tenant ID can be found, for example, from https://portal.azure.com under **Azure Active Directory**, **Properties** as Directory ID. 
###Code from azureml.core.authentication import InteractiveLoginAuthentication interactive_auth = InteractiveLoginAuthentication(tenant_id="my-tenant-id") ws = Workspace(subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace", auth=interactive_auth) ###Output _____no_output_____ ###Markdown Azure CLI AuthenticationIf you have installed azure-cli package, and used ```az login``` command to log in to your Azure Subscription, you can use _AzureCliAuthentication_ class.Note that interactive authentication described above won't use existing Azure CLI auth tokens. ###Code from azureml.core.authentication import AzureCliAuthentication cli_auth = AzureCliAuthentication() ws = Workspace(subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace", auth=cli_auth) print("Found workspace {} at location {}".format(ws.name, ws.location)) ###Output _____no_output_____ ###Markdown MSI Authentication__Note__: _MSI authentication is supported only when using SDK from Azure Virtual Machine. The code below will fail on local computer._When using Azure ML SDK on Azure Virtual Machine (VM), you can use Managed Service Identity (MSI) based authentication. This mode allows the VM connect to the Workspace without storing credentials in the Python code.As a pre-requisite, enable System-assigned Managed Identity for your VM as described in [this document](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm).Then, assign the VM access to your Workspace. For example from Azure Portal, navigate to your workspace, select __Access Control (IAM)__, __Add Role Assignment__, specify __Virtual Machine__ for __Assign Access To__ dropdown, and select your VM's identity.![msi assignment](images/msiaccess.PNG)After completing these steps, you can use authenticate using MsiAuthentication instance. ###Code from azureml.core.authentication import MsiAuthentication msi_auth = MsiAuthentication() ws = Workspace(subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace", auth=msi_auth) print("Found workspace {} at location {}".format(ws.name, ws.location)) ###Output _____no_output_____ ###Markdown Service Principal AuthenticationWhen setting up a machine learning workflow as an automated process, we recommend using Service Principal Authentication. This approach decouples the authentication from any specific user login, and allows managed access control.Note that you must have administrator privileges over the Azure subscription to complete these steps.The first step is to create a service principal. First, go to [Azure Portal](https://portal.azure.com), select **Azure Active Directory** and **App Registrations**. Then select **+New application**, give your service principal a name, for example _my-svc-principal_. You can leave other parameters as is.Then click **Register**.![service principal creation](images/svc-pr-1.PNG) From the page for your newly created service principal, copy the _Application ID_ and _Tenant ID_ as they are needed later.![application and tenant id](images/svc-pr-2.PNG) Then select **Certificates & secrets**, and **+New client secret** write a description for your key, and select duration. Then click **Add**, and copy the value of client secret to a secure location.![tenant id](images/svc-pr-3.PNG) Finally, you need to give the service principal permissions to access your workspace. 
Navigate to **Resource Groups**, to the resource group for your Machine Learning Workspace. Then select **Access Control (IAM)** and **Add a role assignment**. For _Role_, specify which level of access you need to grant, for example _Contributor_. Start entering your service principal name and once it is found, select it, and click **Save**.![add role](images/svc-pr-4.PNG) Now you are ready to use the service principal authentication. For example, to connect to your Workspace, see code below and enter your own values for tenant ID, application ID, subscription ID, resource group and workspace.**We strongly recommended that you do not insert the secret password to code**. Instead, you can use environment variables to pass it to your code, for example through Azure Key Vault, or through secret build variables in Azure DevOps. For local testing, you can for example use following PowerShell command to set the environment variable.```$env:AZUREML_PASSWORD = "my-password"``` ###Code import os from azureml.core.authentication import ServicePrincipalAuthentication svc_pr_password = os.environ.get("AZUREML_PASSWORD") svc_pr = ServicePrincipalAuthentication( tenant_id="my-tenant-id", service_principal_id="my-application-id", service_principal_password=svc_pr_password) ws = Workspace( subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace", auth=svc_pr ) print("Found workspace {} at location {}".format(ws.name, ws.location)) ###Output _____no_output_____ ###Markdown See [Register an application with the Microsoft identity platform](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app) quickstart for more details about application registrations. Using Secrets in Remote RunsSometimes, you may have to pass a secret to a remote run, for example username and password to authenticate against external data source.Azure ML SDK enables this use case through Key Vault associated with your workspace. The workflow for adding a secret is following.On local computer: 1. Read in a local secret, for example from environment variable or user input. To keep them secret, do not insert secret values into code as hard-coded strings. 2. Obtain a reference to the keyvault 3. Add the secret name-value pair in the key vault. The secret is then available for remote runs as shown further below.__Note__: The _azureml.core.keyvault.Keyvault_ is different from _azure.keyvault_ library. It is intended as simplified wrapper for setting, getting and listing user secrets in Workspace Key Vault. ###Code import os, uuid local_secret = os.environ.get("LOCAL_SECRET", default = str(uuid.uuid4())) # Use random UUID as a substitute for real secret. keyvault = ws.get_default_keyvault() keyvault.set_secret(name="secret-name", value = local_secret) ###Output _____no_output_____ ###Markdown The _set_secret_ method adds a new secret if one doesn't exist, or updates an existing one with new value.You can list secret names you've added. This method doesn't return the values of the secrets. ###Code keyvault.list_secrets() ###Output _____no_output_____ ###Markdown You can retrieve the value of the secret, and validate that it matches the original value. __Note__: This method returns the secret value. Take care not to write the the secret value to output. 
###Code retrieved_secret = keyvault.get_secret(name="secret-name") local_secret==retrieved_secret ###Output _____no_output_____ ###Markdown In submitted runs on local and remote compute, you can use the get_secret method of Run instance to get the secret value from Key Vault. The method gives you a simple shortcut: the Run instance is aware of its Workspace and Keyvault, so it can directly obtain the secret without you having to instantiate the Workspace and Keyvault within remote run.__Note__: This method returns the secret value. Take care not to write the secret to output.For example, let's create a simple script _get_secret.py_ that gets the secret we set earlier. In an actual appication, you would use the secret, for example to access a database or other password-protected resource. ###Code %%writefile get_secret.py from azureml.core import Run run = Run.get_context() secret_value = run.get_secret(name="secret-name") print("Got secret value {} , but don't write it out!".format(len(secret_value) * "*")) ###Output _____no_output_____ ###Markdown Then, submit the script as a regular script run, and find the obfuscated secret value in run output. You can use the same approach to other kinds of runs, such as Estimator ones. ###Code from azureml.core import Experiment, Run from azureml.core.script_run_config import ScriptRunConfig exp = Experiment(workspace = ws, name="try-secret") src = ScriptRunConfig(source_directory=".", script="get_secret.py") run = exp.submit(src) run.wait_for_completion(show_output=True) ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/manage-azureml-service/authentication-in-azureml/authentication-in-azureml.png) Authentication in Azure Machine LearningThis notebook shows you how to authenticate to your Azure ML Workspace using 1. Interactive Login Authentication 2. Azure CLI Authentication 3. Service Principal Authentication The interactive authentication is suitable for local experimentation on your own computer. Azure CLI authentication is suitable if you are already using Azure CLI for managing Azure resources, and want to sign in only once. The Service Principal authentication is suitable for automated workflows, for example as part of Azure Devops build. ###Code from azureml.core import Workspace ###Output _____no_output_____ ###Markdown Interactive AuthenticationInteractive authentication is the default mode when using Azure ML SDK.When you connect to your workspace using workspace.from_config, you will get an interactive login dialog. ###Code ws = Workspace.from_config() ###Output _____no_output_____ ###Markdown Also, if you explicitly specify the subscription ID, resource group and resource group, you will get the dialog. ###Code ws = Workspace(subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace") ###Output _____no_output_____ ###Markdown Note the user you're authenticated as must have access to the subscription and resource group. If you receive an error```AuthenticationException: You don't have access to xxxxxx-xxxx-xxx-xxx-xxxxxxxxxx subscription. All the subscriptions that you have access to = ...```check that the you used correct login and entered the correct subscription ID. 
In some cases, you may see a version of the error message containing text: ```All the subscriptions that you have access to = []```In such a case, you may have to specify the tenant ID of the Azure Active Directory you're using. An example would be accessing a subscription as a guest to a tenant that is not your default. You specify the tenant by explicitly instantiating _InteractiveLoginAuthentication_ with Tenant ID as argument. The Tenant ID can be found, for example, from https://portal.azure.com under **Azure Active Directory**, **Properties** as Directory ID. ###Code from azureml.core.authentication import InteractiveLoginAuthentication interactive_auth = InteractiveLoginAuthentication(tenant_id="my-tenant-id") ws = Workspace(subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace", auth=interactive_auth) ###Output _____no_output_____ ###Markdown Azure CLI AuthenticationIf you have installed azure-cli package, and used ```az login``` command to log in to your Azure Subscription, you can use _AzureCliAuthentication_ class.Note that interactive authentication described above won't use existing Azure CLI auth tokens. ###Code from azureml.core.authentication import AzureCliAuthentication cli_auth = AzureCliAuthentication() ws = Workspace(subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace", auth=cli_auth) print("Found workspace {} at location {}".format(ws.name, ws.location)) ###Output _____no_output_____ ###Markdown Service Principal AuthenticationWhen setting up a machine learning workflow as an automated process, we recommend using Service Principal Authentication. This approach decouples the authentication from any specific user login, and allows managed access control.Note that you must have administrator privileges over the Azure subscription to complete these steps.The first step is to create a service principal. First, go to [Azure Portal](https://portal.azure.com), select **Azure Active Directory** and **App Registrations**. Then select **+New application**, give your service principal a name, for example _my-svc-principal_. You can leave other parameters as is.Then click **Register**.![service principal creation](images/svc-pr-1.PNG) From the page for your newly created service principal, copy the _Application ID_ and _Tenant ID_ as they are needed later.![application and tenant id](images/svc-pr-2.PNG) Then select **Certificates & secrets**, and **+New client secret** write a description for your key, and select duration. Then click **Add**, and copy the value of client secret to a secure location.![tenant id](images/svc-pr-3.PNG) Finally, you need to give the service principal permissions to access your workspace. Navigate to **Resource Groups**, to the resource group for your Machine Learning Workspace. Then select **Access Control (IAM)** and **Add a role assignment**. For _Role_, specify which level of access you need to grant, for example _Contributor_. Start entering your service principal name and once it is found, select it, and click **Save**.![add role](images/svc-pr-4.PNG) Now you are ready to use the service principal authentication. For example, to connect to your Workspace, see code below and enter your own values for tenant ID, application ID, subscription ID, resource group and workspace.**We strongly recommended that you do not insert the secret password to code**. 
Instead, you can use environment variables to pass it to your code, for example through Azure Key Vault, or through secret build variables in Azure DevOps. For local testing, you can for example use following PowerShell command to set the environment variable.```$env:AZUREML_PASSWORD = "my-password"``` ###Code import os from azureml.core.authentication import ServicePrincipalAuthentication svc_pr_password = os.environ.get("AZUREML_PASSWORD") svc_pr = ServicePrincipalAuthentication( tenant_id="my-tenant-id", service_principal_id="my-application-id", service_principal_password=svc_pr_password) ws = Workspace( subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace", auth=svc_pr ) print("Found workspace {} at location {}".format(ws.name, ws.location)) ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/manage-azureml-service/authentication-in-azureml/authentication-in-azureml.png) Authentication in Azure Machine LearningThis notebook shows you how to authenticate to your Azure ML Workspace using 1. Interactive Login Authentication 2. Azure CLI Authentication 3. Managed Service Identity (MSI) Authentication 4. Service Principal Authentication The interactive authentication is suitable for local experimentation on your own computer. Azure CLI authentication is suitable if you are already using Azure CLI for managing Azure resources, and want to sign in only once. The MSI and Service Principal authentication are suitable for automated workflows, for example as part of Azure Devops build. ###Code from azureml.core import Workspace ###Output _____no_output_____ ###Markdown Interactive AuthenticationInteractive authentication is the default mode when using Azure ML SDK.When you connect to your workspace using workspace.from_config, you will get an interactive login dialog. ###Code ws = Workspace.from_config() ###Output _____no_output_____ ###Markdown Also, if you explicitly specify the subscription ID, resource group and workspace name, you will get the dialog. ###Code ws = Workspace(subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace") ###Output _____no_output_____ ###Markdown Note the user you're authenticated as must have access to the subscription and resource group. If you receive an error```AuthenticationException: You don't have access to xxxxxx-xxxx-xxx-xxx-xxxxxxxxxx subscription. All the subscriptions that you have access to = ...```check that the you used correct login and entered the correct subscription ID. In some cases, you may see a version of the error message containing text: ```All the subscriptions that you have access to = []```In such a case, you may have to specify the tenant ID of the Azure Active Directory you're using. An example would be accessing a subscription as a guest to a tenant that is not your default. You specify the tenant by explicitly instantiating _InteractiveLoginAuthentication_ with Tenant ID as argument. The Tenant ID can be found, for example, from https://portal.azure.com under **Azure Active Directory**, **Properties** as Directory ID. 
###Code from azureml.core.authentication import InteractiveLoginAuthentication interactive_auth = InteractiveLoginAuthentication(tenant_id="my-tenant-id") ws = Workspace(subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace", auth=interactive_auth) ###Output _____no_output_____ ###Markdown Azure CLI AuthenticationIf you have installed azure-cli package, and used ```az login``` command to log in to your Azure Subscription, you can use _AzureCliAuthentication_ class.Note that interactive authentication described above won't use existing Azure CLI auth tokens. ###Code from azureml.core.authentication import AzureCliAuthentication cli_auth = AzureCliAuthentication() ws = Workspace(subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace", auth=cli_auth) print("Found workspace {} at location {}".format(ws.name, ws.location)) ###Output _____no_output_____ ###Markdown MSI Authentication__Note__: _MSI authentication is supported only when using SDK from Azure Virtual Machine. The code below will fail on local computer._When using Azure ML SDK on Azure Virtual Machine (VM), you can use Managed Service Identity (MSI) based authentication. This mode allows the VM connect to the Workspace without storing credentials in the Python code.As a prerequisite, enable System-assigned Managed Identity for your VM as described in [Configure managed identities for Azure resources on a VM using the Azure portal](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm).Then, assign the VM access to your Workspace. For example from Azure Portal, navigate to your workspace, select __Access Control (IAM)__, __Add Role Assignment__, specify __Virtual Machine__ for __Assign Access To__ dropdown, and select your VM's identity.![msi assignment](images/msiaccess.PNG)After completing these steps, you can use authenticate using MsiAuthentication instance. ###Code from azureml.core.authentication import MsiAuthentication msi_auth = MsiAuthentication() ws = Workspace(subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace", auth=msi_auth) print("Found workspace {} at location {}".format(ws.name, ws.location)) ###Output _____no_output_____ ###Markdown Service Principal AuthenticationWhen setting up a machine learning workflow as an automated process, we recommend using Service Principal Authentication. This approach decouples the authentication from any specific user login, and allows managed access control.Note that you must have administrator privileges over the Azure subscription to complete these steps.The first step is to create a service principal. First, go to [Azure Portal](https://portal.azure.com), select **Azure Active Directory** and **App Registrations**. Then select **+New application**, give your service principal a name, for example _my-svc-principal_. You can leave other parameters as is.Then click **Register**.![service principal creation](images/svc-pr-1.PNG) From the page for your newly created service principal, copy the _Application ID_ and _Tenant ID_ as they are needed later.![application and tenant id](images/svc-pr-2.PNG) Then select **Certificates & secrets**, and **+New client secret** write a description for your key, and select duration. 
Then click **Add**, and copy the value of client secret to a secure location.![tenant id](images/svc-pr-3.PNG) Finally, you need to give the service principal permissions to access your workspace. Navigate to **Resource Groups**, to the resource group for your Machine Learning Workspace. Then select **Access Control (IAM)** and **Add a role assignment**. For _Role_, specify which level of access you need to grant, for example _Contributor_. Start entering your service principal name and once it is found, select it, and click **Save**.![add role](images/svc-pr-4.PNG) Now you are ready to use the service principal authentication. For example, to connect to your Workspace, see code below and enter your own values for tenant ID, application ID, subscription ID, resource group and workspace.**We strongly recommended that you do not insert the secret password to code**. Instead, you can use environment variables to pass it to your code, for example through Azure Key Vault, or through secret build variables in Azure DevOps. For local testing, you can for example use following PowerShell command to set the environment variable.```$env:AZUREML_PASSWORD = "my-password"``` ###Code import os from azureml.core.authentication import ServicePrincipalAuthentication svc_pr_password = os.environ.get("AZUREML_PASSWORD") svc_pr = ServicePrincipalAuthentication( tenant_id="my-tenant-id", service_principal_id="my-application-id", service_principal_password=svc_pr_password) ws = Workspace( subscription_id="my-subscription-id", resource_group="my-ml-rg", workspace_name="my-ml-workspace", auth=svc_pr ) print("Found workspace {} at location {}".format(ws.name, ws.location)) ###Output _____no_output_____ ###Markdown See [Register an application with the Microsoft identity platform](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app) quickstart for more details about application registrations. Using Secrets in Remote RunsSometimes, you may have to pass a secret to a remote run, for example username and password to authenticate against external data source.Azure ML SDK enables this use case through Key Vault associated with your workspace. The workflow for adding a secret is following.On local computer: 1. Read in a local secret, for example from environment variable or user input. To keep them secret, do not insert secret values into code as hard-coded strings. 2. Obtain a reference to the keyvault 3. Add the secret name-value pair in the key vault. The secret is then available for remote runs as shown further below.__Note__: The _azureml.core.keyvault.Keyvault_ is different from _azure.keyvault_ library. It is intended as simplified wrapper for setting, getting and listing user secrets in Workspace Key Vault. ###Code import os, uuid local_secret = os.environ.get("LOCAL_SECRET", default = str(uuid.uuid4())) # Use random UUID as a substitute for real secret. keyvault = ws.get_default_keyvault() keyvault.set_secret(name="secret-name", value = local_secret) ###Output _____no_output_____ ###Markdown The _set_secret_ method adds a new secret if one doesn't exist, or updates an existing one with new value.You can list secret names you've added. This method doesn't return the values of the secrets. ###Code keyvault.list_secrets() ###Output _____no_output_____ ###Markdown You can retrieve the value of the secret, and validate that it matches the original value. __Note__: This method returns the secret value. Take care not to write the the secret value to output. 
###Code
retrieved_secret = keyvault.get_secret(name="secret-name")
local_secret==retrieved_secret
###Output
 _____no_output_____
###Markdown
In submitted runs on local and remote compute, you can use the get_secret method of the Run instance to get the secret value from Key Vault. The method gives you a simple shortcut: the Run instance is aware of its Workspace and Keyvault, so it can directly obtain the secret without you having to instantiate the Workspace and Keyvault within the remote run.

__Note__: This method returns the secret value. Take care not to write the secret to output.

For example, let's create a simple script _get_secret.py_ that gets the secret we set earlier. In an actual application, you would use the secret, for example to access a database or other password-protected resource.
###Code
%%writefile get_secret.py

from azureml.core import Run

run = Run.get_context()
secret_value = run.get_secret(name="secret-name")
print("Got secret value {} , but don't write it out!".format(len(secret_value) * "*"))
###Output
 _____no_output_____
###Markdown
Then, submit the script as a regular script run, and find the obfuscated secret value in the run output. You can use the same approach with other kinds of runs, such as Estimator ones.
###Code
from azureml.core import Experiment, Run
from azureml.core.script_run_config import ScriptRunConfig

exp = Experiment(workspace = ws, name="try-secret")
src = ScriptRunConfig(source_directory=".", script="get_secret.py")

run = exp.submit(src)
run.wait_for_completion(show_output=True)
###Output
 _____no_output_____
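###Markdown
Putting the pieces together, the cell below is a minimal sketch of how an automated job (for example an Azure DevOps build step) might combine service principal authentication with the workspace Key Vault, using only the calls demonstrated above. The environment variable names and the secret name used here are placeholders chosen for illustration, not values required by the SDK.
###Code
import os

from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication

# Credentials are read from the environment (for example secret build variables),
# never hard-coded in the script itself.
svc_pr = ServicePrincipalAuthentication(
    tenant_id=os.environ.get("AZUREML_TENANT_ID"),
    service_principal_id=os.environ.get("AZUREML_APP_ID"),
    service_principal_password=os.environ.get("AZUREML_PASSWORD"))

ws = Workspace(subscription_id="my-subscription-id",
               resource_group="my-ml-rg",
               workspace_name="my-ml-workspace",
               auth=svc_pr)

# Stage a secret that later remote runs can read with run.get_secret(...).
keyvault = ws.get_default_keyvault()
keyvault.set_secret(name="db-password",
                    value=os.environ.get("DB_PASSWORD", "placeholder-value"))

print("Workspace {} ready; secret staged in its Key Vault.".format(ws.name))
###Output
 _____no_output_____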
###Markdown
Despite having access to the workspace, you may sometimes see the following error when retrieving it:

```
You are currently logged-in to xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx tenant. You don't have access to xxxxxx-xxxx-xxx-xxx-xxxxxxxxxx subscription, please check if it is in this tenant.
```

This error sometimes occurs when you are trying to access a subscription to which you were recently added. In this case, you need to force authentication again to avoid using a cached authentication token that has not picked up the new permissions. You can do so by setting `force=True` on the `InteractiveLoginAuthentication()` constructor as follows:
###Code
forced_interactive_auth = InteractiveLoginAuthentication(tenant_id="my-tenant-id", force=True)

ws = Workspace(subscription_id="my-subscription-id",
               resource_group="my-ml-rg",
               workspace_name="my-ml-workspace",
               auth=forced_interactive_auth)
###Output
 _____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.
notebooks/Image Classifier Project.ipynb
###Markdown Developing an AI applicationGoing forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smart phone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications. In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice you'd train this classifier, then export it for use in your application. We'll be using [this dataset](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html) of 102 flower categories, you can see a few examples below. The project is broken down into multiple steps:* Load and preprocess the image dataset* Train the image classifier on your dataset* Use the trained classifier to predict image contentWe'll lead you through each part which you'll implement in Python.When you've completed this project, you'll have an application that can be trained on any set of labeled images. Here your network will be learning about flowers and end up as a command line application. But, what you do with your new skills depends on your imagination and effort in building a dataset. For example, imagine an app where you take a picture of a car, it tells you what the make and model is, then looks up information about it. Go build your own dataset and make something new.First up is importing the packages you'll need. It's good practice to keep all the imports at the beginning of your code. As you work through this notebook and find you need to import a package, make sure to add the import up here. ###Code # Imports here import matplotlib.pyplot as plt import numpy as np import torch from torch import nn from torch import optim import torch.nn.functional as F from torchvision import datasets, transforms, models device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') ###Output _____no_output_____ ###Markdown Load the dataHere you'll use `torchvision` to load the data ([documentation](http://pytorch.org/docs/0.3.0/torchvision/index.html)). The data should be included alongside this notebook, otherwise you can [download it here](https://s3.amazonaws.com/content.udacity-data.com/nd089/flower_data.tar.gz). The dataset is split into three parts, training, validation, and testing. For the training, you'll want to apply transformations such as random scaling, cropping, and flipping. This will help the network generalize leading to better performance. You'll also need to make sure the input data is resized to 224x224 pixels as required by the pre-trained networks.The validation and testing sets are used to measure the model's performance on data it hasn't seen yet. For this you don't want any scaling or rotation transformations, but you'll need to resize then crop the images to the appropriate size.The pre-trained networks you'll use were trained on the ImageNet dataset where each color channel was normalized separately. For all three sets you'll need to normalize the means and standard deviations of the images to what the network expects. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`, calculated from the ImageNet images. 
These values will shift each color channel to be centered at 0 and range from -1 to 1. ###Code data_dir = 'flowers' train_dir = data_dir + '/train' valid_dir = data_dir + '/valid' test_dir = data_dir + '/test' # TODO: Define your transforms for the training, validation, and testing sets train_transforms = transforms.Compose([ transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) valid_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) test_transforms = transforms.Compose([ transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) # TODO: Load the datasets with ImageFolder train_data = datasets.ImageFolder(train_dir, transform=train_transforms) valid_data = datasets.ImageFolder(valid_dir, transform=valid_transforms) test_data = datasets.ImageFolder(test_dir, transform=test_transforms) # TODO: Using the image datasets and the trainforms, define the dataloaders trainloader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True) validloader = torch.utils.data.DataLoader(valid_data, batch_size=64) testloader = torch.utils.data.DataLoader(test_data, batch_size=64) ###Output _____no_output_____ ###Markdown Label mappingYou'll also need to load in a mapping from category label to category name. You can find this in the file `cat_to_name.json`. It's a JSON object which you can read in with the [`json` module](https://docs.python.org/2/library/json.html). This will give you a dictionary mapping the integer encoded categories to the actual names of the flowers. ###Code import json with open('cat_to_name.json', 'r') as f: cat_to_name = json.load(f) ###Output _____no_output_____ ###Markdown Building and training the classifierNow that the data is ready, it's time to build and train the classifier. As usual, you should use one of the pretrained models from `torchvision.models` to get the image features. Build and train a new feed-forward classifier using those features.We're going to leave this part up to you. Refer to [the rubric](https://review.udacity.com/!/rubrics/1663/view) for guidance on successfully completing this section. Things you'll need to do:* Load a [pre-trained network](http://pytorch.org/docs/master/torchvision/models.html) (If you need a starting point, the VGG networks work great and are straightforward to use)* Define a new, untrained feed-forward network as a classifier, using ReLU activations and dropout* Train the classifier layers using backpropagation using the pre-trained network to get the features* Track the loss and accuracy on the validation set to determine the best hyperparametersWe've left a cell open for you below, but use as many as you need. Our advice is to break the problem up into smaller parts you can run separately. Check that each part is doing what you expect, then move on to the next. You'll likely find that as you work through each part, you'll need to go back and modify your previous code. This is totally normal!When training make sure you're updating only the weights of the feed-forward network. You should be able to get the validation accuracy above 70% if you build everything right. 
Make sure to try different hyperparameters (learning rate, units in the classifier, epochs, etc) to find the best model. Save those hyperparameters to use as default values in the next part of the project.One last important tip if you're using the workspace to run your code: To avoid having your workspace disconnect during the long-running tasks in this notebook, please read in the earlier page in this lesson called Intro toGPU Workspaces about Keeping Your Session Active. You'll want to include code from the workspace_utils.py module.**Note for Workspace users:** If your network is over 1 GB when saved as a checkpoint, there might be issues with saving backups in your workspace. Typically this happens with wide dense layers after the convolutional layers. If your saved checkpoint is larger than 1 GB (you can open a terminal and check with `ls -lh`), you should reduce the size of your hidden layers and train again. ###Code # TODO: Build and train your network model = models.densenet121(pretrained=True) # Fixing parameters for param in model.parameters(): param.requires_grad = False classifier = nn.Sequential( nn.Linear(1024, 256), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(256, 102), nn.LogSoftmax(dim=1) ) model.classifier = classifier model.to(device) criterion = nn.NLLLoss() optimizer = optim.Adam(model.classifier.parameters(), lr=0.003) %%time epochs = 10 steps = 0 running_loss = 0 print_every = 5 for epoch in range(epochs): for inputs, labels in trainloader: steps += 1 inputs, labels = inputs.to(device), labels.to(device) optimizer.zero_grad() logps = model.forward(inputs) loss = criterion(logps, labels) loss.backward() optimizer.step() running_loss += loss.item() if steps % print_every == 0: valid_loss = 0 accuracy = 0 model.eval() with torch.no_grad(): for inputs, labels in validloader: inputs, labels = inputs.to(device), labels.to(device) logps = model.forward(inputs) batch_loss = criterion(logps, labels) valid_loss += batch_loss.item() ps = torch.exp(logps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)).item() print(f'Epoch {epoch + 1}/{epochs}... ' f'Train loss: {running_loss/print_every:.3f}... ' f'Validation loss: {valid_loss/len(validloader):.3f}... ' f'Validation accuracy: {accuracy/len(validloader):.3f}... ') running_loss = 0 model.train() ###Output Epoch 1/10... Train loss: 4.600... Validation loss: 4.511... Validation accuracy: 0.035... Epoch 1/10... Train loss: 4.499... Validation loss: 4.295... Validation accuracy: 0.104... Epoch 1/10... Train loss: 4.297... Validation loss: 4.105... Validation accuracy: 0.112... Epoch 1/10... Train loss: 4.068... Validation loss: 3.909... Validation accuracy: 0.160... Epoch 1/10... Train loss: 3.891... Validation loss: 3.636... Validation accuracy: 0.233... Epoch 1/10... Train loss: 3.786... Validation loss: 3.394... Validation accuracy: 0.283... Epoch 1/10... Train loss: 3.594... Validation loss: 3.208... Validation accuracy: 0.332... Epoch 1/10... Train loss: 3.244... Validation loss: 2.910... Validation accuracy: 0.331... Epoch 1/10... Train loss: 3.102... Validation loss: 2.565... Validation accuracy: 0.464... Epoch 1/10... Train loss: 2.826... Validation loss: 2.377... Validation accuracy: 0.469... Epoch 1/10... Train loss: 2.691... Validation loss: 2.226... Validation accuracy: 0.503... Epoch 1/10... Train loss: 2.523... Validation loss: 2.062... Validation accuracy: 0.560... Epoch 1/10... Train loss: 2.368... Validation loss: 1.883... 
Validation accuracy: 0.575... Epoch 1/10... Train loss: 2.374... Validation loss: 1.773... Validation accuracy: 0.582... Epoch 1/10... Train loss: 2.312... Validation loss: 1.688... Validation accuracy: 0.596... Epoch 1/10... Train loss: 2.149... Validation loss: 1.541... Validation accuracy: 0.635... Epoch 1/10... Train loss: 1.898... Validation loss: 1.384... Validation accuracy: 0.651... Epoch 1/10... Train loss: 1.883... Validation loss: 1.340... Validation accuracy: 0.685... Epoch 1/10... Train loss: 1.855... Validation loss: 1.199... Validation accuracy: 0.722... Epoch 1/10... Train loss: 1.968... Validation loss: 1.260... Validation accuracy: 0.691... Epoch 2/10... Train loss: 1.717... Validation loss: 1.287... Validation accuracy: 0.668... Epoch 2/10... Train loss: 1.497... Validation loss: 1.124... Validation accuracy: 0.733... Epoch 2/10... Train loss: 1.646... Validation loss: 1.026... Validation accuracy: 0.756... Epoch 2/10... Train loss: 1.580... Validation loss: 0.989... Validation accuracy: 0.737... Epoch 2/10... Train loss: 1.361... Validation loss: 0.933... Validation accuracy: 0.768... Epoch 2/10... Train loss: 1.477... Validation loss: 0.953... Validation accuracy: 0.771... Epoch 2/10... Train loss: 1.350... Validation loss: 0.920... Validation accuracy: 0.770... Epoch 2/10... Train loss: 1.272... Validation loss: 0.906... Validation accuracy: 0.767... Epoch 2/10... Train loss: 1.399... Validation loss: 0.873... Validation accuracy: 0.770... Epoch 2/10... Train loss: 1.288... Validation loss: 0.802... Validation accuracy: 0.794... Epoch 2/10... Train loss: 1.341... Validation loss: 0.798... Validation accuracy: 0.786... Epoch 2/10... Train loss: 1.214... Validation loss: 0.776... Validation accuracy: 0.793... Epoch 2/10... Train loss: 1.227... Validation loss: 0.727... Validation accuracy: 0.826... Epoch 2/10... Train loss: 1.367... Validation loss: 0.775... Validation accuracy: 0.806... Epoch 2/10... Train loss: 1.290... Validation loss: 0.735... Validation accuracy: 0.829... Epoch 2/10... Train loss: 1.211... Validation loss: 0.728... Validation accuracy: 0.813... Epoch 2/10... Train loss: 1.306... Validation loss: 0.698... Validation accuracy: 0.816... Epoch 2/10... Train loss: 1.240... Validation loss: 0.682... Validation accuracy: 0.835... Epoch 2/10... Train loss: 1.187... Validation loss: 0.659... Validation accuracy: 0.841... Epoch 2/10... Train loss: 1.181... Validation loss: 0.663... Validation accuracy: 0.841... Epoch 2/10... Train loss: 1.182... Validation loss: 0.668... Validation accuracy: 0.828... Epoch 3/10... Train loss: 1.208... Validation loss: 0.644... Validation accuracy: 0.838... Epoch 3/10... Train loss: 1.033... Validation loss: 0.652... Validation accuracy: 0.810... Epoch 3/10... Train loss: 1.072... Validation loss: 0.607... Validation accuracy: 0.839... Epoch 3/10... Train loss: 1.036... Validation loss: 0.560... Validation accuracy: 0.861... Epoch 3/10... Train loss: 1.051... Validation loss: 0.564... Validation accuracy: 0.850... Epoch 3/10... Train loss: 1.057... Validation loss: 0.604... Validation accuracy: 0.834... Epoch 3/10... Train loss: 1.181... Validation loss: 0.595... Validation accuracy: 0.848... Epoch 3/10... Train loss: 1.001... Validation loss: 0.649... Validation accuracy: 0.825... Epoch 3/10... Train loss: 0.951... Validation loss: 0.597... Validation accuracy: 0.843... Epoch 3/10... Train loss: 1.096... Validation loss: 0.588... Validation accuracy: 0.840... Epoch 3/10... Train loss: 1.084... Validation loss: 0.603... 
Validation accuracy: 0.836... Epoch 3/10... Train loss: 1.047... Validation loss: 0.547... Validation accuracy: 0.851... Epoch 3/10... Train loss: 0.952... Validation loss: 0.562... Validation accuracy: 0.853... Epoch 3/10... Train loss: 1.102... Validation loss: 0.512... Validation accuracy: 0.870... Epoch 3/10... Train loss: 0.831... Validation loss: 0.558... Validation accuracy: 0.847... Epoch 3/10... Train loss: 0.986... Validation loss: 0.504... Validation accuracy: 0.865... Epoch 3/10... Train loss: 0.981... Validation loss: 0.496... Validation accuracy: 0.864... Epoch 3/10... Train loss: 0.987... Validation loss: 0.456... Validation accuracy: 0.883... Epoch 3/10... Train loss: 0.988... Validation loss: 0.478... Validation accuracy: 0.885... Epoch 3/10... Train loss: 0.904... Validation loss: 0.510... Validation accuracy: 0.852... Epoch 4/10... Train loss: 1.162... Validation loss: 0.448... Validation accuracy: 0.882... Epoch 4/10... Train loss: 0.906... Validation loss: 0.473... Validation accuracy: 0.880... Epoch 4/10... Train loss: 0.910... Validation loss: 0.480... Validation accuracy: 0.881... Epoch 4/10... Train loss: 0.861... Validation loss: 0.482... Validation accuracy: 0.872... Epoch 4/10... Train loss: 0.916... Validation loss: 0.441... Validation accuracy: 0.888... Epoch 4/10... Train loss: 0.985... Validation loss: 0.451... Validation accuracy: 0.878... Epoch 4/10... Train loss: 0.704... Validation loss: 0.503... Validation accuracy: 0.863... Epoch 4/10... Train loss: 0.851... Validation loss: 0.466... Validation accuracy: 0.871... Epoch 4/10... Train loss: 1.005... Validation loss: 0.487... Validation accuracy: 0.868... Epoch 4/10... Train loss: 0.912... Validation loss: 0.454... Validation accuracy: 0.877... Epoch 4/10... Train loss: 0.745... Validation loss: 0.406... Validation accuracy: 0.896... Epoch 4/10... Train loss: 0.863... Validation loss: 0.403... Validation accuracy: 0.900... Epoch 4/10... Train loss: 0.794... Validation loss: 0.406... Validation accuracy: 0.897... Epoch 4/10... Train loss: 0.815... Validation loss: 0.426... Validation accuracy: 0.882... Epoch 4/10... Train loss: 0.793... Validation loss: 0.438... Validation accuracy: 0.875... Epoch 4/10... Train loss: 0.952... Validation loss: 0.386... Validation accuracy: 0.893... Epoch 4/10... Train loss: 0.844... Validation loss: 0.416... Validation accuracy: 0.890... Epoch 4/10... Train loss: 0.886... Validation loss: 0.433... Validation accuracy: 0.881... Epoch 4/10... Train loss: 0.894... Validation loss: 0.446... Validation accuracy: 0.876... Epoch 4/10... Train loss: 1.001... Validation loss: 0.458... Validation accuracy: 0.870... Epoch 4/10... Train loss: 0.891... Validation loss: 0.431... Validation accuracy: 0.889... Epoch 5/10... Train loss: 0.871... Validation loss: 0.469... Validation accuracy: 0.873... Epoch 5/10... Train loss: 0.872... Validation loss: 0.411... Validation accuracy: 0.902... Epoch 5/10... Train loss: 0.753... Validation loss: 0.372... Validation accuracy: 0.892... Epoch 5/10... Train loss: 0.830... Validation loss: 0.384... Validation accuracy: 0.901... Epoch 5/10... Train loss: 0.784... Validation loss: 0.418... Validation accuracy: 0.885... Epoch 5/10... Train loss: 0.747... Validation loss: 0.405... Validation accuracy: 0.892... Epoch 5/10... Train loss: 0.755... Validation loss: 0.406... Validation accuracy: 0.897... Epoch 5/10... Train loss: 0.905... Validation loss: 0.390... Validation accuracy: 0.898... Epoch 5/10... Train loss: 0.817... Validation loss: 0.388... 
Validation accuracy: 0.895... Epoch 5/10... Train loss: 0.846... Validation loss: 0.356... Validation accuracy: 0.909... Epoch 5/10... Train loss: 0.728... Validation loss: 0.370... Validation accuracy: 0.899... Epoch 5/10... Train loss: 0.867... Validation loss: 0.378... Validation accuracy: 0.898... Epoch 5/10... Train loss: 0.869... Validation loss: 0.350... Validation accuracy: 0.909... Epoch 5/10... Train loss: 0.816... Validation loss: 0.362... Validation accuracy: 0.915... Epoch 5/10... Train loss: 0.815... Validation loss: 0.390... Validation accuracy: 0.899... Epoch 5/10... Train loss: 0.772... Validation loss: 0.435... Validation accuracy: 0.880... Epoch 5/10... Train loss: 0.843... Validation loss: 0.375... Validation accuracy: 0.896... Epoch 5/10... Train loss: 0.859... Validation loss: 0.359... Validation accuracy: 0.895... Epoch 5/10... Train loss: 0.619... Validation loss: 0.367... Validation accuracy: 0.905... Epoch 5/10... Train loss: 0.804... Validation loss: 0.343... Validation accuracy: 0.910... Epoch 5/10... Train loss: 0.870... Validation loss: 0.359... Validation accuracy: 0.902... Epoch 6/10... Train loss: 0.738... Validation loss: 0.403... Validation accuracy: 0.878... Epoch 6/10... Train loss: 0.757... Validation loss: 0.393... Validation accuracy: 0.903... Epoch 6/10... Train loss: 0.777... Validation loss: 0.373... Validation accuracy: 0.904... Epoch 6/10... Train loss: 0.750... Validation loss: 0.352... Validation accuracy: 0.903... Epoch 6/10... Train loss: 0.756... Validation loss: 0.368... Validation accuracy: 0.903... Epoch 6/10... Train loss: 0.694... Validation loss: 0.335... Validation accuracy: 0.907... Epoch 6/10... Train loss: 0.786... Validation loss: 0.345... Validation accuracy: 0.903... Epoch 6/10... Train loss: 0.710... Validation loss: 0.365... Validation accuracy: 0.893... Epoch 6/10... Train loss: 0.820... Validation loss: 0.343... Validation accuracy: 0.897... Epoch 6/10... Train loss: 0.726... Validation loss: 0.328... Validation accuracy: 0.915... Epoch 6/10... Train loss: 0.762... Validation loss: 0.349... Validation accuracy: 0.907... Epoch 6/10... Train loss: 0.660... Validation loss: 0.358... Validation accuracy: 0.902... Epoch 6/10... Train loss: 0.798... Validation loss: 0.372... Validation accuracy: 0.900... Epoch 6/10... Train loss: 0.715... Validation loss: 0.353... Validation accuracy: 0.907... Epoch 6/10... Train loss: 0.725... Validation loss: 0.332... Validation accuracy: 0.912... Epoch 6/10... Train loss: 0.821... Validation loss: 0.355... Validation accuracy: 0.906... Epoch 6/10... Train loss: 0.766... Validation loss: 0.362... Validation accuracy: 0.899... Epoch 6/10... Train loss: 0.731... Validation loss: 0.319... Validation accuracy: 0.920... Epoch 6/10... Train loss: 0.701... Validation loss: 0.363... Validation accuracy: 0.908... Epoch 6/10... Train loss: 0.784... Validation loss: 0.356... Validation accuracy: 0.911... Epoch 7/10... Train loss: 0.763... Validation loss: 0.325... Validation accuracy: 0.912... Epoch 7/10... Train loss: 0.780... Validation loss: 0.367... Validation accuracy: 0.900... Epoch 7/10... Train loss: 0.711... Validation loss: 0.330... Validation accuracy: 0.921... Epoch 7/10... Train loss: 0.808... Validation loss: 0.344... Validation accuracy: 0.904... Epoch 7/10... Train loss: 0.647... Validation loss: 0.347... Validation accuracy: 0.908... Epoch 7/10... Train loss: 0.685... Validation loss: 0.359... Validation accuracy: 0.906... Epoch 7/10... Train loss: 0.777... Validation loss: 0.365... 
Validation accuracy: 0.897... Epoch 7/10... Train loss: 0.665... Validation loss: 0.328... Validation accuracy: 0.917... Epoch 7/10... Train loss: 0.641... Validation loss: 0.305... Validation accuracy: 0.923... Epoch 7/10... Train loss: 0.601... Validation loss: 0.314... Validation accuracy: 0.907... Epoch 7/10... Train loss: 0.706... Validation loss: 0.307... Validation accuracy: 0.912... Epoch 7/10... Train loss: 0.632... Validation loss: 0.322... Validation accuracy: 0.915... Epoch 7/10... Train loss: 0.776... Validation loss: 0.342... Validation accuracy: 0.904... Epoch 7/10... Train loss: 0.668... Validation loss: 0.346... Validation accuracy: 0.901... Epoch 7/10... Train loss: 0.643... Validation loss: 0.323... Validation accuracy: 0.910... Epoch 7/10... Train loss: 0.699... Validation loss: 0.330... Validation accuracy: 0.914... Epoch 7/10... Train loss: 0.689... Validation loss: 0.309... Validation accuracy: 0.918... Epoch 7/10... Train loss: 0.855... Validation loss: 0.304... Validation accuracy: 0.911... Epoch 7/10... Train loss: 0.729... Validation loss: 0.325... Validation accuracy: 0.906... Epoch 7/10... Train loss: 0.702... Validation loss: 0.365... Validation accuracy: 0.894... Epoch 7/10... Train loss: 0.916... Validation loss: 0.334... Validation accuracy: 0.912... Epoch 8/10... Train loss: 0.811... Validation loss: 0.366... Validation accuracy: 0.904... Epoch 8/10... Train loss: 0.676... Validation loss: 0.349... Validation accuracy: 0.902... Epoch 8/10... Train loss: 0.767... Validation loss: 0.380... Validation accuracy: 0.895... Epoch 8/10... Train loss: 0.627... Validation loss: 0.338... Validation accuracy: 0.896... Epoch 8/10... Train loss: 0.729... Validation loss: 0.299... Validation accuracy: 0.920... Epoch 8/10... Train loss: 0.682... Validation loss: 0.314... Validation accuracy: 0.914... Epoch 8/10... Train loss: 0.650... Validation loss: 0.328... Validation accuracy: 0.906... Epoch 8/10... Train loss: 0.812... Validation loss: 0.310... Validation accuracy: 0.921... Epoch 8/10... Train loss: 0.575... Validation loss: 0.333... Validation accuracy: 0.909... Epoch 8/10... Train loss: 0.726... Validation loss: 0.290... Validation accuracy: 0.923... Epoch 8/10... Train loss: 0.642... Validation loss: 0.306... Validation accuracy: 0.906... Epoch 8/10... Train loss: 0.771... Validation loss: 0.294... Validation accuracy: 0.921... Epoch 8/10... Train loss: 0.757... Validation loss: 0.318... Validation accuracy: 0.914... Epoch 8/10... Train loss: 0.644... Validation loss: 0.343... Validation accuracy: 0.910... Epoch 8/10... Train loss: 0.701... Validation loss: 0.322... Validation accuracy: 0.918... Epoch 8/10... Train loss: 0.653... Validation loss: 0.279... Validation accuracy: 0.931... Epoch 8/10... Train loss: 0.760... Validation loss: 0.268... Validation accuracy: 0.928... Epoch 8/10... Train loss: 0.713... Validation loss: 0.280... Validation accuracy: 0.931... Epoch 8/10... Train loss: 0.657... Validation loss: 0.318... Validation accuracy: 0.919... Epoch 8/10... Train loss: 0.677... Validation loss: 0.325... Validation accuracy: 0.909... Epoch 9/10... Train loss: 0.697... Validation loss: 0.327... Validation accuracy: 0.900... Epoch 9/10... Train loss: 0.644... Validation loss: 0.306... Validation accuracy: 0.907... Epoch 9/10... Train loss: 0.625... Validation loss: 0.293... Validation accuracy: 0.921... Epoch 9/10... Train loss: 0.717... Validation loss: 0.296... Validation accuracy: 0.921... Epoch 9/10... Train loss: 0.605... Validation loss: 0.289... 
Validation accuracy: 0.920... Epoch 9/10... Train loss: 0.612... Validation loss: 0.296... Validation accuracy: 0.921... Epoch 9/10... Train loss: 0.727... Validation loss: 0.304... Validation accuracy: 0.918... Epoch 9/10... Train loss: 0.642... Validation loss: 0.305... Validation accuracy: 0.911... Epoch 9/10... Train loss: 0.578... Validation loss: 0.321... Validation accuracy: 0.912... Epoch 9/10... Train loss: 0.802... Validation loss: 0.296... Validation accuracy: 0.921... Epoch 9/10... Train loss: 0.606... Validation loss: 0.298... Validation accuracy: 0.916... Epoch 9/10... Train loss: 0.544... Validation loss: 0.297... Validation accuracy: 0.920... Epoch 9/10... Train loss: 0.795... Validation loss: 0.280... Validation accuracy: 0.927... Epoch 9/10... Train loss: 0.484... Validation loss: 0.291... Validation accuracy: 0.915... Epoch 9/10... Train loss: 0.779... Validation loss: 0.311... Validation accuracy: 0.917... Epoch 9/10... Train loss: 0.774... Validation loss: 0.282... Validation accuracy: 0.921... Epoch 9/10... Train loss: 0.547... Validation loss: 0.311... Validation accuracy: 0.923... Epoch 9/10... Train loss: 0.743... Validation loss: 0.325... Validation accuracy: 0.916... Epoch 9/10... Train loss: 0.610... Validation loss: 0.333... Validation accuracy: 0.913... Epoch 9/10... Train loss: 0.740... Validation loss: 0.308... Validation accuracy: 0.922... Epoch 9/10... Train loss: 0.576... Validation loss: 0.298... Validation accuracy: 0.919... Epoch 10/10... Train loss: 0.636... Validation loss: 0.337... Validation accuracy: 0.917... Epoch 10/10... Train loss: 0.720... Validation loss: 0.331... Validation accuracy: 0.914... Epoch 10/10... Train loss: 0.615... Validation loss: 0.318... Validation accuracy: 0.930... Epoch 10/10... Train loss: 0.481... Validation loss: 0.297... Validation accuracy: 0.934... Epoch 10/10... Train loss: 0.725... Validation loss: 0.299... Validation accuracy: 0.924... Epoch 10/10... Train loss: 0.564... Validation loss: 0.308... Validation accuracy: 0.909... Epoch 10/10... Train loss: 0.750... Validation loss: 0.288... Validation accuracy: 0.925... Epoch 10/10... Train loss: 0.604... Validation loss: 0.330... Validation accuracy: 0.912... Epoch 10/10... Train loss: 0.741... Validation loss: 0.295... Validation accuracy: 0.922... Epoch 10/10... Train loss: 0.573... Validation loss: 0.297... Validation accuracy: 0.923... Epoch 10/10... Train loss: 0.677... Validation loss: 0.307... Validation accuracy: 0.918... Epoch 10/10... Train loss: 0.603... Validation loss: 0.296... Validation accuracy: 0.921... Epoch 10/10... Train loss: 0.781... Validation loss: 0.285... Validation accuracy: 0.927... Epoch 10/10... Train loss: 0.644... Validation loss: 0.301... Validation accuracy: 0.925... Epoch 10/10... Train loss: 0.736... Validation loss: 0.282... Validation accuracy: 0.929... Epoch 10/10... Train loss: 0.572... Validation loss: 0.293... Validation accuracy: 0.925... Epoch 10/10... Train loss: 0.623... Validation loss: 0.296... Validation accuracy: 0.925... Epoch 10/10... Train loss: 0.655... Validation loss: 0.299... Validation accuracy: 0.926... Epoch 10/10... Train loss: 0.573... Validation loss: 0.283... Validation accuracy: 0.923... Epoch 10/10... Train loss: 0.674... Validation loss: 0.289... Validation accuracy: 0.925... Epoch 10/10... Train loss: 0.765... Validation loss: 0.287... Validation accuracy: 0.922... 
CPU times: user 1h 1min 10s, sys: 9min 27s, total: 1h 10min 38s Wall time: 1h 2min 44s ###Markdown Testing your network
It's good practice to test your trained network on test data, images the network has never seen either in training or validation. This will give you a good estimate for the model's performance on completely new images. Run the test images through the network and measure the accuracy, the same way you did validation. You should be able to reach around 70% accuracy on the test set if the model has been trained well. ###Code
# TODO: Do validation on the test set
accuracy = 0
step = 0
model.eval()
with torch.no_grad():
    for inputs, labels in testloader:
        step += 1
        inputs, labels = inputs.to(device), labels.to(device)
        logps = model.forward(inputs)
        loss = criterion(logps, labels)
        ps = torch.exp(logps)
        top_p, top_class = ps.topk(1, dim=1)
        equals = top_class == labels.view(*top_class.shape)
        accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
print(f'Test Accuracy: {accuracy/len(testloader):.3f}...')
###Output Test Accuracy: 0.914... ###Markdown Save the checkpoint
Now that your network is trained, save the model so you can load it later for making predictions. You probably want to save other things such as the mapping of classes to indices which you get from one of the image datasets: `image_datasets['train'].class_to_idx`. You can attach this to the model as an attribute which makes inference easier later on.
```model.class_to_idx = image_datasets['train'].class_to_idx```
Remember that you'll want to completely rebuild the model later so you can use it for inference. Make sure to include any information you need in the checkpoint. If you want to load the model and keep training, you'll want to save the number of epochs as well as the optimizer state, `optimizer.state_dict`. You'll likely want to use this trained model in the next part of the project, so best to save it now. ###Code
# TODO: Save the checkpoint
checkpoint = {'input_size': 1024,
              'hidden_units': 256,
              'output_size': 102,
              'epochs': epochs,
              'batch_size': 64,
              'model': models.densenet121(pretrained=True),
              'classifier': classifier,
              'optimizer': optimizer.state_dict(),
              'state_dict': model.state_dict(),
              'class_to_idx': train_data.class_to_idx,
              'cat_to_name': cat_to_name
             }

torch.save(checkpoint, 'finish_checkpoint.pth')
###Output /opt/conda/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/models/densenet.py:212: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_. ###Markdown Loading the checkpoint
At this point it's good to write a function that can load a checkpoint and rebuild the model. That way you can come back to this project and keep working on it without having to retrain the network.
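The loader in the next cell handles rebuilding the model. If you also wanted to resume training, the saved `optimizer.state_dict()` would let you restore the optimizer state as well. The snippet below is only a sketch of that extra step: the optimizer type and learning rate are placeholder assumptions (the real values are set where training is configured earlier in the notebook), and it reuses the checkpoint keys saved above.

```python
import torch
from torch import optim

# Sketch only: resume training from the checkpoint saved above.
# Assumes `model` has already been rebuilt (see the loader in the next cell).
checkpoint = torch.load('finish_checkpoint.pth')

# Re-create an optimizer over the classifier parameters, then restore its saved state.
# Adam and lr=0.003 are assumptions, not values taken from this notebook.
optimizer = optim.Adam(model.classifier.parameters(), lr=0.003)
optimizer.load_state_dict(checkpoint['optimizer'])

start_epoch = checkpoint['epochs']  # continue counting epochs from here
```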
###Code
# TODO: Write a function that loads a checkpoint and rebuilds the model
def load_checkpoint(filepath):
    checkpoint = torch.load(filepath, map_location='cpu')
    model = checkpoint['model']
    model.classifier = checkpoint['classifier']
    model.load_state_dict(checkpoint['state_dict'], strict=False)
    model.class_to_idx = checkpoint['class_to_idx']
    optimizer = checkpoint['optimizer']
    epochs = checkpoint['epochs']
    categories = checkpoint['cat_to_name']
    for param in model.parameters():
        param.requires_grad = False
    return (model, checkpoint['class_to_idx'], categories)

# load_checkpoint returns three values: the model, the class-to-index mapping,
# and the category-name dictionary stored in the checkpoint.
model, class_to_idx, cat_to_name = load_checkpoint('finish_checkpoint.pth')

# Test the model if it's working
# TODO: Do validation on the test set
model.to(device)
accuracy = 0
step = 0
model.eval()
with torch.no_grad():
    for inputs, labels in testloader:
        step += 1
        inputs, labels = inputs.to(device), labels.to(device)
        logps = model.forward(inputs)
        loss = criterion(logps, labels)
        ps = torch.exp(logps)
        top_p, top_class = ps.topk(1, dim=1)
        equals = top_class == labels.view(*top_class.shape)
        accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
print(f'Test Accuracy: {accuracy/len(testloader):.3f}...')
###Output Test Accuracy: 0.914... ###Markdown Inference for classification
Now you'll write a function to use a trained network for inference. That is, you'll pass an image into the network and predict the class of the flower in the image. Write a function called `predict` that takes an image and a model, then returns the top $K$ most likely classes along with the probabilities. It should look like
```python
probs, classes = predict(image_path, model)
print(probs)
print(classes)
> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]
> ['70', '3', '45', '62', '55']
```
First you'll need to handle processing the input image such that it can be used in your network. Image Preprocessing
You'll want to use `PIL` to load the image ([documentation](https://pillow.readthedocs.io/en/latest/reference/Image.html)). It's best to write a function that preprocesses the image so it can be used as input for the model. This function should process the images in the same manner used for training. First, resize the images where the shortest side is 256 pixels, keeping the aspect ratio. This can be done with the [`thumbnail`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.thumbnail) or [`resize`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.resize) methods. Then you'll need to crop out the center 224x224 portion of the image. Color channels of images are typically encoded as integers 0-255, but the model expects floats 0-1. You'll need to convert the values. It's easiest with a Numpy array, which you can get from a PIL image like so `np_image = np.array(pil_image)`. As before, the network expects the images to be normalized in a specific way. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`. You'll want to subtract the means from each color channel, then divide by the standard deviation. And finally, PyTorch expects the color channel to be the first dimension but it's the third dimension in the PIL image and Numpy array. You can reorder dimensions using [`ndarray.transpose`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ndarray.transpose.html). The color channel needs to be first and retain the order of the other two dimensions.
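The steps above can also be carried out by hand with `PIL` and Numpy. Here is a minimal sketch of that manual route, assuming a 3-channel RGB input; the notebook's own `process_image` in the next cell takes the shortcut of reusing torchvision transforms instead. The helper name `process_image_manual` is chosen here for illustration only.

```python
import numpy as np
from PIL import Image

def process_image_manual(image_path):
    ''' Manual version of the preprocessing described above:
        resize, center-crop, scale to 0-1, normalize, reorder channels. '''
    img = Image.open(image_path)
    # Resize so the shortest side is 256 pixels, keeping the aspect ratio
    if img.width < img.height:
        img = img.resize((256, int(256 * img.height / img.width)))
    else:
        img = img.resize((int(256 * img.width / img.height), 256))
    # Crop out the center 224x224 portion
    left = (img.width - 224) // 2
    top = (img.height - 224) // 2
    img = img.crop((left, top, left + 224, top + 224))
    # Integers 0-255 -> floats 0-1, then normalize with the ImageNet statistics
    np_image = np.array(img) / 255.0
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    np_image = (np_image - mean) / std
    # Move the color channel to the first dimension, keep the other two in order
    return np_image.transpose((2, 0, 1))
```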
###Code def process_image(image):
    ''' Scales, crops, and normalizes a PIL image for a PyTorch model,
        returns a Numpy array
    '''
    from PIL import Image
    img = Image.open(image)
    resize = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    ])
    return resize(img)
###Output _____no_output_____ ###Markdown To check your work, the function below converts a PyTorch tensor and displays it in the notebook. If your `process_image` function works, running the output through this function should return the original image (except for the cropped out portions). ###Code def imshow(image, ax=None, title=None):
    """Imshow for Tensor."""
    if ax is None:
        fig, ax = plt.subplots()

    # PyTorch tensors assume the color channel is the first dimension
    # but matplotlib assumes it is the third dimension
    image = image.numpy().transpose((1, 2, 0))

    # Undo preprocessing
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    image = std * image + mean

    # Image needs to be clipped between 0 and 1 or it looks like noise when displayed
    image = np.clip(image, 0, 1)

    ax.imshow(image)

    return ax
###Output _____no_output_____ ###Markdown Class Prediction
Once you can get images in the correct format, it's time to write a function for making predictions with your model. A common practice is to predict the top 5 or so (usually called top-$K$) most probable classes. You'll want to calculate the class probabilities then find the $K$ largest values. To get the top $K$ largest values in a tensor use [`x.topk(k)`](http://pytorch.org/docs/master/torch.html#torch.topk). This method returns both the highest `k` probabilities and the indices of those probabilities corresponding to the classes. You need to convert from these indices to the actual class labels using `class_to_idx`, which hopefully you added to the model, or from the `ImageFolder` you used to load the data ([see here](#Save-the-checkpoint)). Make sure to invert the dictionary so you get a mapping from index to class as well. Again, this method should take a path to an image and a model checkpoint, then return the probabilities and classes.
```python
probs, classes = predict(image_path, model)
print(probs)
print(classes)
> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]
> ['70', '3', '45', '62', '55']
```
###Code def getKeysByValue(dictOfElements, valueToFind):
    # Return every key in the dictionary whose value equals valueToFind
    listOfKeys = list()
    listOfItems = dictOfElements.items()
    for item in listOfItems:
        if item[1] == valueToFind:
            listOfKeys.append(item[0])
    return listOfKeys

def predict(image_path, model, categories, topk=5):
    model.to(device)
    model.eval()
    img = process_image(image_path)
    img = torch.unsqueeze(img, 0)
    img = img.to(device)
    logps = model.forward(img)
    ps = torch.exp(logps)
    top_p, top_class = ps.topk(topk, dim=1)
    # Invert class_to_idx so the predicted indices map back to class labels
    idx = model.class_to_idx
    inverse = {v: k for k, v in idx.items()}
    category_idx = [inverse[y] for x in top_class.tolist() for y in x]
    if categories is not None:
        category_names = [categories[str(x)] for x in category_idx]
        return (top_p, category_idx, category_names)
    else:
        return (top_p, category_idx, None)

probs, classes, category_names = predict('./flowers/test/1/image_06743.jpg', model, None)
print(probs)
print(classes)
###Output tensor([[ 0.9936, 0.0020, 0.0014, 0.0012, 0.0008]], device='cuda:0') ['1', '83', '70', '62', '96'] ###Markdown Sanity Checking
Now that you can use a trained model for predictions, check to make sure it makes sense.
Even if the testing accuracy is high, it's always good to check that there aren't obvious bugs. Use `matplotlib` to plot the probabilities for the top 5 classes as a bar graph, along with the input image. It should look like this:
You can convert from the class integer encoding to actual flower names with the `cat_to_name.json` file (should have been loaded earlier in the notebook). To show a PyTorch tensor as an image, use the `imshow` function defined above. ###Code
# TODO: Display an image along with the top 5 classes
imshow(process_image('./flowers/test/1/image_06743.jpg'))

def show_prob(classes, probs, cat_names):
    # Map class labels to flower names and plot the probabilities as a horizontal bar chart
    converted_classes = [cat_names[x] for x in classes]
    fig, ax = plt.subplots()
    ax.barh(converted_classes, probs.tolist()[0])
    plt.show()

show_prob(classes, probs, cat_to_name)
###Output _____no_output_____
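###Markdown The two pieces above can also be combined into a single figure with the image on top and the top-5 probabilities underneath, which is closer to the layout described for the sanity check. The sketch below is illustrative only: it reuses `process_image`, `predict`, `imshow`, `model`, `cat_to_name`, and the same test image from the cells above, while `sanity_check` is a helper name chosen here and not part of the original notebook.

```python
import matplotlib.pyplot as plt

def sanity_check(image_path, model, cat_to_name, topk=5):
    # Predict the top-k classes, translate them to flower names, and plot
    # the input image above a horizontal bar chart of the probabilities.
    probs, classes, _ = predict(image_path, model, None, topk=topk)
    names = [cat_to_name[c] for c in classes]

    fig, (ax_img, ax_bar) = plt.subplots(nrows=2, figsize=(5, 8))
    imshow(process_image(image_path), ax=ax_img)
    ax_img.set_title(names[0])
    ax_img.axis('off')

    ax_bar.barh(names, probs.tolist()[0])
    ax_bar.invert_yaxis()  # most probable class at the top
    plt.tight_layout()
    plt.show()

sanity_check('./flowers/test/1/image_06743.jpg', model, cat_to_name)
```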
docs_src/index.ipynb
###Markdown Welcome to fastai ###Code from fastai.vision import * from fastai.gen_doc.nbdoc import * from fastai.core import * from fastai.basic_train import * ###Output _____no_output_____ ###Markdown The fastai library simplifies training fast and accurate neural nets using modern best practices. It's based on research in to deep learning best practices undertaken at [fast.ai](http://www.fast.ai), including "out of the box" support for [`vision`](/vision.htmlvision), [`text`](/text.htmltext), [`tabular`](/tabular.htmltabular), and [`collab`](/collab.htmlcollab) (collaborative filtering) models. If you're looking for the source code, head over to the [fastai repo](https://github.com/fastai/fastai) on GitHub. For brief examples, see the [examples](https://github.com/fastai/fastai/tree/master/examples) folder; detailed examples are provided in the full documentation (see the sidebar). For example, here's how to train an MNIST model using [resnet18](https://arxiv.org/abs/1512.03385) (from the [vision example](https://github.com/fastai/fastai/blob/master/examples/vision.ipynb)): ###Code path = untar_data(URLs.MNIST_SAMPLE) data = ImageDataBunch.from_folder(path) learn = cnn_learner(data, models.resnet18, metrics=accuracy) learn.fit(1) jekyll_note("""This documentation is all built from notebooks; that means that you can try any of the code you see in any notebook yourself! You'll find the notebooks in the <a href="https://github.com/fastai/fastai/tree/master/docs_src">docs_src</a> folder of the <a href="https://github.com/fastai/fastai">fastai</a> repo. For instance, <a href="https://nbviewer.jupyter.org/github/fastai/fastai/blob/master/docs_src/index.ipynb">here</a> is the notebook source of what you're reading now.""") ###Output _____no_output_____ ###Markdown Installation and updating To install or update fastai, we recommend `conda`:```conda install -c pytorch -c fastai fastai ```For troubleshooting, and alternative installations (including pip and CPU-only options) see the [fastai readme](https://github.com/fastai/fastai/blob/master/README.md). Reading the docs To get started quickly, click *Applications* on the sidebar, and then choose the application you're interested in. That will take you to a walk-through of training a model of that type. You can then either explore the various links from there, or dive more deeply into the various fastai modules.We've provided below a quick summary of the key modules in this library. For details on each one, use the sidebar to find the module you're interested in. Each module includes an overview and example of how to use it, along with documentation for every class, function, and method. API documentation looks, for example, like this: An example function ###Code show_doc(rotate, full_name='rotate') ###Output _____no_output_____ ###Markdown ---Types for each parameter, and the return type, are displayed following standard Python [type hint syntax](https://www.python.org/dev/peps/pep-0484/). Sometimes for compound types we use [type variables](/fastai_typing.html). Types that are defined by fastai or Pytorch link directly to more information about that type; try clicking *Image* in the function above for an example. The docstring for the symbol is shown immediately after the signature, along with a link to the source code for the symbol in GitHub. After the basic signature and docstring you'll find examples and additional details (not shown in this example). 
As you'll see at the top of the page, all symbols documented like this also appear in the table of contents.For inherited classes and some types of decorated function, the base class or decorator type will also be shown at the end of the signature, delimited by `::`. For `vision.transforms`, the random number generator used for data augmentation is shown instead of the type, for randomly generated parameters. Module structure Imports fastai is designed to support both interactive computing as well as traditional software development. For interactive computing, where convenience and speed of experimentation is a priority, data scientists often prefer to grab all the symbols they need, with `import *`. Therefore, fastai is designed to support this approach, without compromising on maintainability and understanding.In order to do so, the module dependencies are carefully managed (see next section), with each exporting a carefully chosen set of symbols when using `import *` . In general, for interactive computing, to just play around the core modules and the training loop you can do```from fastai.basics import *```If you want experiment with one of the *applications* such as vision, then you can do```from fastai.vision import *``` That will give you all the standard external modules you'll need, in their customary namespaces (e.g. `pandas as pd`, `numpy as np`, `matplotlib.pyplot as plt`), plus the core fastai libraries. In addition, the main classes and functions for your application ([`fastai.vision`](/vision.htmlvision), in this case), e.g. creating a [`DataBunch`](/basic_data.htmlDataBunch) from an image folder and training a convolutional neural network (with [`cnn_learner`](/vision.learner.htmlcnn_learner)), are also imported. If you don't wish to import any application, but want all the main functionality from fastai, use `from fastai.basics import *`. Of course, you can also just import the specific symbols that you require, without using `import *`.If you wish to see where a symbol is imported from, either just type the symbol name (in a REPL such as Jupyter Notebook or IPython), or (in most editors) wave your mouse over the symbol to see the definition. For instance: ###Code Learner ###Output _____no_output_____ ###Markdown Welcome to fastai ###Code from fastai.vision import * from fastai.gen_doc.nbdoc import * from fastai.core import * from fastai.basic_train import * ###Output _____no_output_____ ###Markdown The fastai library simplifies training fast and accurate neural nets using modern best practices. It's based on research in to deep learning best practices undertaken at [fast.ai](http://www.fast.ai), including "out of the box" support for [`vision`](/vision.htmlvision), [`text`](/text.htmltext), [`tabular`](/tabular.htmltabular), and [`collab`](/collab.htmlcollab) (collaborative filtering) models. If you're looking for the source code, head over to the [fastai repo](https://github.com/fastai/fastai) on GitHub. For brief examples, see the [examples](https://github.com/fastai/fastai/tree/master/examples) folder; detailed examples are provided in the full documentation (see the sidebar). 
For example, here's how to train an MNIST model using [resnet18](https://arxiv.org/abs/1512.03385) (from the [vision example](https://github.com/fastai/fastai/blob/master/examples/vision.ipynb)): ###Code path = untar_data(URLs.MNIST_SAMPLE) data = ImageDataBunch.from_folder(path) learn = cnn_learner(data, models.resnet18, metrics=accuracy) learn.fit(1) jekyll_note("""This documentation is all built from notebooks; that means that you can try any of the code you see in any notebook yourself! You'll find the notebooks in the <a href="https://github.com/fastai/fastai/tree/master/docs_src">docs_src</a> folder of the <a href="https://github.com/fastai/fastai">fastai</a> repo. For instance, <a href="https://nbviewer.jupyter.org/github/fastai/fastai/blob/master/docs_src/index.ipynb">here</a> is the notebook source of what you're reading now.""") ###Output _____no_output_____ ###Markdown Installation and updating To install or update fastai, we recommend `conda`:```conda install -c pytorch -c fastai fastai ```For troubleshooting, and alternative installations (including pip and CPU-only options) see the [fastai readme](https://github.com/fastai/fastai/blob/master/README.md). Reading the docs To get started quickly, click *Applications* on the sidebar, and then choose the application you're interested in. That will take you to a walk-through of training a model of that type. You can then either explore the various links from there, or dive more deeply into the various fastai modules.We've provided below a quick summary of the key modules in this library. For details on each one, use the sidebar to find the module you're interested in. Each module includes an overview and example of how to use it, along with documentation for every class, function, and method. API documentation looks, for example, like this: An example function ###Code show_doc(rotate, full_name='rotate') ###Output _____no_output_____ ###Markdown ---Types for each parameter, and the return type, are displayed following standard Python [type hint syntax](https://www.python.org/dev/peps/pep-0484/). Sometimes for compound types we use [type variables](/fastai_typing.html). Types that are defined by fastai or Pytorch link directly to more information about that type; try clicking *Image* in the function above for an example. The docstring for the symbol is shown immediately after the signature, along with a link to the source code for the symbol in GitHub. After the basic signature and docstring you'll find examples and additional details (not shown in this example). As you'll see at the top of the page, all symbols documented like this also appear in the table of contents.For inherited classes and some types of decorated function, the base class or decorator type will also be shown at the end of the signature, delimited by `::`. For `vision.transforms`, the random number generator used for data augmentation is shown instead of the type, for randomly generated parameters. Module structure Imports fastai is designed to support both interactive computing as well as traditional software development. For interactive computing, where convenience and speed of experimentation is a priority, data scientists often prefer to grab all the symbols they need, with `import *`. Therefore, fastai is designed to support this approach, without compromising on maintainability and understanding.In order to do so, the module dependencies are carefully managed (see next section), with each exporting a carefully chosen set of symbols when using `import *`. 
In general, for interactive computing, you'll want to import from both `fastai`, and from one of the *applications*, such as: ###Code from fastai.vision import * ###Output _____no_output_____ ###Markdown That will give you all the standard external modules you'll need, in their customary namespaces (e.g. `pandas as pd`, `numpy as np`, `matplotlib.pyplot as plt`), plus the core fastai libraries. In addition, the main classes and functions for your application ([`fastai.vision`](/vision.htmlvision), in this case), e.g. creating a [`DataBunch`](/basic_data.htmlDataBunch) from an image folder and training a convolutional neural network (with [`cnn_learner`](/vision.learner.htmlcnn_learner)), are also imported. If you don't wish to import any application, but want all the main functionality from fastai, use `from fastai.basics import *`. Of course, you can also just import the specific symbols that you require, without using `import *`.If you wish to see where a symbol is imported from, either just type the symbol name (in a REPL such as Jupyter Notebook or IPython), or (in most editors) wave your mouse over the symbol to see the definition. For instance: ###Code Learner ###Output _____no_output_____ ###Markdown Welcome to fastai ###Code from fastai import * from fastai.vision import * from fastai.gen_doc.nbdoc import * from fastai.core import * from fastai.basic_train import * ###Output _____no_output_____ ###Markdown The fastai library simplifies training fast and accurate neural nets using modern best practices. It's based on research in to deep learning best practices undertaken at [fast.ai](http://www.fast.ai), including "out of the box" support for [`vision`](/vision.htmlvision), [`text`](/text.htmltext), [`tabular`](/tabular.htmltabular), and [`collab`](/collab.htmlcollab) (collaborative filtering) models. If you're looking for the source code, head over to the [fastai repo](https://github.com/fastai/fastai) on GitHub. For brief examples, see the [examples](https://github.com/fastai/fastai/tree/master/examples) folder; detailed examples are provided in the full documentation (see the sidebar). For example, here's how to train an MNIST model using [resnet18](https://arxiv.org/abs/1512.03385) (from the [vision example](https://github.com/fastai/fastai/blob/master/examples/vision.ipynb)): ###Code path = untar_data(URLs.MNIST_SAMPLE) data = ImageDataBunch.from_folder(path) learn = create_cnn(data, models.resnet18, metrics=accuracy) learn.fit(1) jekyll_note("""This documentation is all built from notebooks; that means that you can try any of the code you see in any notebook yourself! You'll find the notebooks in the <a href="https://github.com/fastai/fastai/tree/master/docs_src">docs_src</a> folder of the <a href="https://github.com/fastai/fastai">fastai</a> repo. For instance, <a href="https://nbviewer.jupyter.org/github/fastai/fastai/blob/master/docs_src/index.ipynb">here</a> is the notebook source of what you're reading now.""") ###Output _____no_output_____ ###Markdown Installation and updating To install or update fastai, we recommend `conda`:```conda install -c pytorch -c fastai fastai pytorch-nightly cuda92```For troubleshooting, and alternative installations (including pip and CPU-only options) see the [fastai readme](https://github.com/fastai/fastai/blob/master/README.md). Reading the docs To get started quickly, click *Applications* on the sidebar, and then choose the application you're interested in. That will take you to a walk-through of training a model of that type. 
You can then either explore the various links from there, or dive more deeply into the various fastai modules.We've provided below a quick summary of the key modules in this library. For details on each one, use the sidebar to find the module you're interested in. Each module includes an overview and example of how to use it, along with documentation for every class, function, and method. API documentation looks, for example, like this: An example function ###Code show_doc(rotate) ###Output _____no_output_____ ###Markdown ---Types for each parameter, and the return type, are displayed following standard Python [type hint syntax](https://www.python.org/dev/peps/pep-0484/). Sometimes for compound types we use [type variables](/fastai_typing.html). Types that are defined by fastai or Pytorch link directly to more information about that type; try clicking *Image* in the function above for an example. The docstring for the symbol is shown immediately after the signature, along with a link to the source code for the symbol in GitHub. After the basic signature and docstring you'll find examples and additional details (not shown in this example). As you'll see at the top of the page, all symbols documented like this also appear in the table of contents.For inherited classes and some types of decorated function, the base class or decorator type will also be shown at the end of the signature, delimited by `::`. For `vision.transforms`, the random number generator used for data augmentation is shown instead of the type, for randomly generated parameters. Module structure Imports fastai is designed to support both interactive computing as well as traditional software development. For interactive computing, where convenience and speed of experimentation is a priority, data scientists often prefer to grab all the symbols they need, with `import *`. Therefore, fastai is designed to support this approach, without compromising on maintainability and understanding.In order to do so, the module dependencies are carefully managed (see next section), with each exporting a carefully chosen set of symbols when using `import *`. In general, for interactive computing, you'll want to import from both `fastai`, and from one of the *applications*, such as: ###Code from fastai import * from fastai.vision import * ###Output _____no_output_____ ###Markdown That will give you all the standard external modules you'll need, in their customary namespaces (e.g. `pandas as pd`, `numpy as np`, `matplotlib.pyplot as plt`), plus the core fastai libraries. In addition, the main classes and functions for your application ([`fastai.vision`](/vision.htmlvision), in this case), e.g. creating a [`DataBunch`](/basic_data.htmlDataBunch) from an image folder and training a convolutional neural network (with [`create_cnn`](/vision.learner.htmlcreate_cnn)), are also imported.If you wish to see where a symbol is imported from, either just type the symbol name (in a REPL such as Jupyter Notebook or IPython), or (in most editors) wave your mouse over the symbol to see the definition. For instance: ###Code Learner ###Output _____no_output_____ ###Markdown Welcome to fastai ###Code from fastai.vision import * from fastai.gen_doc.nbdoc import * from fastai.core import * from fastai.basic_train import * ###Output _____no_output_____ ###Markdown The fastai library simplifies training fast and accurate neural nets using modern best practices. 
It's based on research in to deep learning best practices undertaken at [fast.ai](http://www.fast.ai), including "out of the box" support for [`vision`](/vision.htmlvision), [`text`](/text.htmltext), [`tabular`](/tabular.htmltabular), and [`collab`](/collab.htmlcollab) (collaborative filtering) models. If you're looking for the source code, head over to the [fastai repo](https://github.com/fastai/fastai) on GitHub. For brief examples, see the [examples](https://github.com/fastai/fastai/tree/master/examples) folder; detailed examples are provided in the full documentation (see the sidebar). For example, here's how to train an MNIST model using [resnet18](https://arxiv.org/abs/1512.03385) (from the [vision example](https://github.com/fastai/fastai/blob/master/examples/vision.ipynb)): ###Code path = untar_data(URLs.MNIST_SAMPLE) data = ImageDataBunch.from_folder(path) learn = cnn_learner(data, models.resnet18, metrics=accuracy) learn.fit(1) jekyll_note("""This documentation is all built from notebooks; that means that you can try any of the code you see in any notebook yourself! You'll find the notebooks in the <a href="https://github.com/fastai/fastai/tree/master/docs_src">docs_src</a> folder of the <a href="https://github.com/fastai/fastai">fastai</a> repo. For instance, <a href="https://nbviewer.jupyter.org/github/fastai/fastai/blob/master/docs_src/index.ipynb">here</a> is the notebook source of what you're reading now.""") ###Output _____no_output_____ ###Markdown Installation and updating To install or update fastai, we recommend `conda`:```conda install -c pytorch -c fastai fastai pytorch```For troubleshooting, and alternative installations (including pip and CPU-only options) see the [fastai readme](https://github.com/fastai/fastai/blob/master/README.md). Reading the docs To get started quickly, click *Applications* on the sidebar, and then choose the application you're interested in. That will take you to a walk-through of training a model of that type. You can then either explore the various links from there, or dive more deeply into the various fastai modules.We've provided below a quick summary of the key modules in this library. For details on each one, use the sidebar to find the module you're interested in. Each module includes an overview and example of how to use it, along with documentation for every class, function, and method. API documentation looks, for example, like this: An example function ###Code show_doc(rotate, full_name='rotate') ###Output _____no_output_____ ###Markdown ---Types for each parameter, and the return type, are displayed following standard Python [type hint syntax](https://www.python.org/dev/peps/pep-0484/). Sometimes for compound types we use [type variables](/fastai_typing.html). Types that are defined by fastai or Pytorch link directly to more information about that type; try clicking *Image* in the function above for an example. The docstring for the symbol is shown immediately after the signature, along with a link to the source code for the symbol in GitHub. After the basic signature and docstring you'll find examples and additional details (not shown in this example). As you'll see at the top of the page, all symbols documented like this also appear in the table of contents.For inherited classes and some types of decorated function, the base class or decorator type will also be shown at the end of the signature, delimited by `::`. 
For `vision.transforms`, the random number generator used for data augmentation is shown instead of the type, for randomly generated parameters. Module structure Imports fastai is designed to support both interactive computing as well as traditional software development. For interactive computing, where convenience and speed of experimentation is a priority, data scientists often prefer to grab all the symbols they need, with `import *`. Therefore, fastai is designed to support this approach, without compromising on maintainability and understanding.In order to do so, the module dependencies are carefully managed (see next section), with each exporting a carefully chosen set of symbols when using `import *`. In general, for interactive computing, you'll want to import from both `fastai`, and from one of the *applications*, such as: ###Code from fastai.vision import * ###Output _____no_output_____ ###Markdown That will give you all the standard external modules you'll need, in their customary namespaces (e.g. `pandas as pd`, `numpy as np`, `matplotlib.pyplot as plt`), plus the core fastai libraries. In addition, the main classes and functions for your application ([`fastai.vision`](/vision.htmlvision), in this case), e.g. creating a [`DataBunch`](/basic_data.htmlDataBunch) from an image folder and training a convolutional neural network (with [`cnn_learner`](/vision.learner.htmlcnn_learner)), are also imported. If you don't wish to import any application, but want all the main functionality from fastai, use `from fastai.basics import *`. Of course, you can also just import the specific symbols that you require, without using `import *`.If you wish to see where a symbol is imported from, either just type the symbol name (in a REPL such as Jupyter Notebook or IPython), or (in most editors) wave your mouse over the symbol to see the definition. For instance: ###Code Learner ###Output _____no_output_____ ###Markdown Welcome to fastai ###Code from fastai import * from fastai.vision import * from fastai.gen_doc.nbdoc import * from fastai.core import * from fastai.basic_train import * ###Output _____no_output_____ ###Markdown The fastai library simplifies training fast and accurate neural nets using modern best practices. It's based on research in to deep learning best practices undertaken at [fast.ai](http://www.fast.ai), including "out of the box" support for [`vision`](/vision.htmlvision), [`text`](/text.htmltext), [`tabular`](/tabular.htmltabular), and [`collab`](/collab.htmlcollab) (collaborative filtering) models. If you're looking for the source code, head over to the [fastai repo](https://github.com/fastai/fastai) on GitHub. For brief examples, see the [examples](https://github.com/fastai/fastai/tree/master/examples) folder; detailed examples are provided in the full documentation (see the sidebar). For example, here's how to train an MNIST model using [resnet18](https://arxiv.org/abs/1512.03385) (from the [vision example](https://github.com/fastai/fastai/blob/master/examples/vision.ipynb)): ###Code path = untar_data(URLs.MNIST_SAMPLE) data = ImageDataBunch.from_folder(path) learn = create_cnn(data, models.resnet18, metrics=accuracy) learn.fit(1) jekyll_note("""This documentation is all built from notebooks; that means that you can try any of the code you see in any notebook yourself! You'll find the notebooks in the <a href="https://github.com/fastai/fastai/tree/master/docs_src">docs_src</a> folder of the <a href="https://github.com/fastai/fastai">fastai</a> repo. 
For instance, <a href="https://nbviewer.jupyter.org/github/fastai/fastai/blob/master/docs_src/index.ipynb">here</a> is the notebook source of what you're reading now.""") ###Output _____no_output_____ ###Markdown Installation and updating To install or update fastai, we recommend `conda`:```conda install -c pytorch -c fastai fastai pytorch-nightly cuda92```For troubleshooting, and alternative installations (including pip and CPU-only options) see the [fastai readme](https://github.com/fastai/fastai/blob/master/README.md). Reading the docs To get started quickly, click *Applications* on the sidebar, and then choose the application you're interested in. That will take you to a walk-through of training a model of that type. You can then either explore the various links from there, or dive more deeply into the various fastai modules.We've provided below a quick summary of the key modules in this library. For details on each one, use the sidebar to find the module you're interested in. Each module includes an overview and example of how to use it, along with documentation for every class, function, and method. API documentation looks, for example, like this: An example function ###Code show_doc(rotate) ###Output _____no_output_____ ###Markdown ---Types for each parameter, and the return type, are displayed following standard Python [type hint syntax](https://www.python.org/dev/peps/pep-0484/). Sometimes for compound types we use [type variables](/fastai_typing.html). Types that are defined by fastai or Pytorch link directly to more information about that type; try clicking *Image* in the function above for an example. The docstring for the symbol is shown immediately after the signature, along with a link to the source code for the symbol in GitHub. After the basic signature and docstring you'll find examples and additional details (not shown in this example). As you'll see at the top of the page, all symbols documented like this also appear in the table of contents.For inherited classes and some types of decorated function, the base class or decorator type will also be shown at the end of the signature, delimited by `::`. For `vision.transforms`, the random number generator used for data augmentation is shown instead of the type, for randomly generated parameters. Module structure Imports fastai is designed to support both interactive computing as well as traditional software development. For interactive computing, where convenience and speed of experimentation is a priority, data scientists often prefer to grab all the symbols they need, with `import *`. Therefore, fastai is designed to support this approach, without compromising on maintainability and understanding.In order to do so, the module dependencies are carefully managed (see next section), with each exporting a carefully chosen set of symbols when using `import *`. In general, for interactive computing, you'll want to import from both `fastai`, and from one of the *applications*, such as: ###Code from fastai import * from fastai.vision import * ###Output _____no_output_____ ###Markdown That will give you all the standard external modules you'll need, in their customary namespaces (e.g. `pandas as pd`, `numpy as np`, `matplotlib.pyplot as plt`), plus the core fastai libraries. In addition, the main classes and functions for your application ([`fastai.vision`](/vision.htmlvision), in this case), e.g. 
creating a [`DataBunch`](/basic_data.htmlDataBunch) from an image folder and training a convolutional neural network (with [`create_cnn`](/vision.learner.htmlcreate_cnn)), are also imported. Alternatively, use `from fastai import vision` to import the vision If you wish to see where a symbol is imported from, either just type the symbol name (in a REPL such as Jupyter Notebook or IPython), or (in most editors) wave your mouse over the symbol to see the definition. For instance: ###Code Learner ###Output _____no_output_____ ###Markdown Welcome to fastai ###Code from fastai import * from fastai.vision import * from fastai.gen_doc.nbdoc import * from fastai.core import * from fastai.basic_train import * ###Output _____no_output_____ ###Markdown The fastai library simplifies training fast and accurate neural nets using modern best practices. It's based on research in to deep learning best practices undertaken at [fast.ai](http://www.fast.ai), including "out of the box" support for [`vision`](/vision.htmlvision), [`text`](/text.htmltext), [`tabular`](/tabular.htmltabular), and [`collab`](/collab.htmlcollab) (collaborative filtering) models. If you're looking for the source code, head over to the [fastai repo](https://github.com/fastai/fastai) on GitHub. For brief examples, see the [examples](https://github.com/fastai/fastai/tree/master/examples) folder; detailed examples are provided in the full documentation (see the sidebar). For example, here's how to train an MNIST model using [resnet18](https://arxiv.org/abs/1512.03385) (from the [vision example](https://github.com/fastai/fastai/blob/master/examples/vision.ipynb)): ###Code path = untar_data(URLs.MNIST_SAMPLE) data = ImageDataBunch.from_folder(path) learn = create_cnn(data, models.resnet18, metrics=accuracy) learn.fit(1) jekyll_note("""This documentation is all built from notebooks; that means that you can try any of the code you see in any notebook yourself! You'll find the notebooks in the <a href="https://github.com/fastai/fastai_docs/tree/master/docs_src">docs_src</a> folder of the <a href="https://github.com/fastai/fastai_docs">fastai_docs</a> repo. For instance, <a href="https://github.com/fastai/fastai_docs/blob/master/docs_src/index.ipynb">here</a> is the notebook source of what you're reading now.""") ###Output _____no_output_____ ###Markdown Installation and updating To install or update fastai, we recommend `conda`: Conda Install* GPU ``` conda install -c pytorch pytorch-nightly cuda92 conda install -c fastai torchvision-nightly conda install -c fastai fastai ``` * CPU ``` conda install -c pytorch pytorch-nightly-cpu conda install -c fastai torchvision-nightly-cpu conda install -c fastai fastai ```For troubleshooting and alternative installations, see the [fastai readme](https://github.com/fastai/fastai/blob/master/README.md). Reading the docs To get started quickly, click *Applications* on the sidebar, and then choose the application you're interested in. That will take you to a walk-through of training a model of that type. You can then either explore the various links from there, or dive more deeply into the various fastai modules.We've provided below a quick summary of the key modules in this library. For details on each one, use the sidebar to find the module you're interested in. Each module includes an overview and example of how to use it, along with documentation for every class, function, and method. 
API documentation looks, for example, like this: An example function ###Code show_doc(rotate) ###Output _____no_output_____ ###Markdown ---Types for each parameter, and the return type, are displayed following standard Python [type hint syntax](https://www.python.org/dev/peps/pep-0484/). Sometimes for compound types we use [type variables](/fastai_typing.html). Types that are defined by fastai or Pytorch link directly to more information about that type; try clicking *Image* in the function above for an example. The docstring for the symbol is shown immediately after the signature, along with a link to the source code for the symbol in GitHub. After the basic signature and docstring you'll find examples and additional details (not shown in this example). As you'll see at the top of the page, all symbols documented like this also appear in the table of contents.For inherited classes and some types of decorated function, the base class or decorator type will also be shown at the end of the signature, delimited by `::`. For `vision.transforms`, the random number generator used for data augmentation is shown instead of the type, for randomly generated parameters. Module structure Imports fastai is designed to support both interactive computing as well as traditional software development. For interactive computing, where convenience and speed of experimentation is a priority, data scientists often prefer to grab all the symbols they need, with `import *`. Therefore, fastai is designed to support this approach, without compromising on maintainability and understanding.In order to do so, the module dependencies are carefully managed (see next section), with each exporting a carefully chosen set of symbols when using `import *`. In general, for interactive computing, you'll want to import from both `fastai`, and from one of the *applications*, such as: ###Code from fastai import * from fastai.vision import * ###Output _____no_output_____ ###Markdown That will give you all the standard external modules you'll need, in their customary namespaces (e.g. `pandas as pd`, `numpy as np`, `matplotlib.pyplot as plt`), plus the core fastai libraries. In addition, the main classes and functions for your application ([`fastai.vision`](/vision.htmlvision), in this case), e.g. creating a [`DataBunch`](/basic_data.htmlDataBunch) from an image folder and training a convolutional neural network (with [`create_cnn`](/vision.learner.htmlcreate_cnn)), are also imported.If you wish to see where a symbol is imported from, either just type the symbol name (in a REPL such as Jupyter Notebook or IPython), or (in most editors) wave your mouse over the symbol to see the definition. For instance: ###Code Learner ###Output _____no_output_____ ###Markdown Welcome to fastai ###Code from fastai.vision import * from fastai.gen_doc.nbdoc import * from fastai.core import * from fastai.basic_train import * ###Output _____no_output_____ ###Markdown The fastai library simplifies training fast and accurate neural nets using modern best practices. It's based on research in to deep learning best practices undertaken at [fast.ai](http://www.fast.ai), including "out of the box" support for [`vision`](/vision.htmlvision), [`text`](/text.htmltext), [`tabular`](/tabular.htmltabular), and [`collab`](/collab.htmlcollab) (collaborative filtering) models. If you're looking for the source code, head over to the [fastai repo](https://github.com/fastai/fastai) on GitHub. 
Module structure Imports fastai is designed to support both interactive computing and traditional software development. For interactive computing, where convenience and speed of experimentation is a priority, data scientists often prefer to grab all the symbols they need, with `import *`. Therefore, fastai is designed to support this approach, without compromising on maintainability and understanding. In order to do so, the module dependencies are carefully managed (see next section), with each module exporting a carefully chosen set of symbols when using `import *`. In general, for interactive computing, you'll want to import from both `fastai`, and from one of the *applications*, such as: ###Code
from fastai import *
from fastai.vision import *
###Output
_____no_output_____
###Markdown
That will give you all the standard external modules you'll need, in their customary namespaces (e.g. `pandas as pd`, `numpy as np`, `matplotlib.pyplot as plt`), plus the core fastai libraries. In addition, the main classes and functions for your application ([`fastai.vision`](/vision.html#vision), in this case), e.g. creating a [`DataBunch`](/basic_data.html#DataBunch) from an image folder and training a convolutional neural network (with [`create_cnn`](/vision.learner.html#create_cnn)), are also imported. If you don't wish to import any application, but want all the main functionality from fastai, use `from fastai.basics import *`. Of course, you can also just import the specific symbols that you require, without using `import *`.
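As a sketch of that last option, the MNIST training example shown earlier on this page could be written with explicit imports instead of `import *` (this assumes the fastai v1 names used elsewhere on this page, such as `create_cnn` and `ImageDataBunch`): ###Code
# Sketch only: the same MNIST example with explicit imports rather than `import *`.
from fastai.vision import untar_data, URLs, ImageDataBunch, create_cnn, models
from fastai.metrics import accuracy

path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
learn = create_cnn(data, models.resnet18, metrics=accuracy)
learn.fit(1)
###Output
_____no_output_____
###Markdown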
If you wish to see where a symbol is imported from, either just type the symbol name (in a REPL such as Jupyter Notebook or IPython), or (in most editors) hover your mouse over the symbol to see its definition. For instance: ###Code
Learner
###Output
_____no_output_____
###Markdown
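You can also check programmatically with plain Python introspection; this is a generic standard-library sketch rather than a fastai feature, and it assumes the imports at the top of this page have been run: ###Code
# Standard-library introspection: report where `Learner` is defined.
import inspect

print(Learner.__module__)              # e.g. 'fastai.basic_train'
print(inspect.getsourcefile(Learner))  # path to the defining source file
###Output
_____no_output_____
###Markdown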
Therefore, fastai is designed to support this approach, without compromising on maintainability and understanding.In order to do so, the module dependencies are carefully managed (see next section), with each exporting a carefully chosen set of symbols when using `import *`. In general, for interactive computing, you'll want to import from both `fastai`, and from one of the *applications*, such as: ###Code from fastai.vision import * ###Output _____no_output_____ ###Markdown That will give you all the standard external modules you'll need, in their customary namespaces (e.g. `pandas as pd`, `numpy as np`, `matplotlib.pyplot as plt`), plus the core fastai libraries. In addition, the main classes and functions for your application ([`fastai.vision`](/vision.htmlvision), in this case), e.g. creating a [`DataBunch`](/basic_data.htmlDataBunch) from an image folder and training a convolutional neural network (with [`create_cnn`](/vision.learner.htmlcreate_cnn)), are also imported. If you don't wish to import any application, but want all the main functionality from fastai, use `from fastai.basics import *`. Of course, you can also just import the specific symbols that you require, without using `import *`.If you wish to see where a symbol is imported from, either just type the symbol name (in a REPL such as Jupyter Notebook or IPython), or (in most editors) wave your mouse over the symbol to see the definition. For instance: ###Code Learner ###Output _____no_output_____ ###Markdown Welcome to fastai ###Code from fastai import * from fastai.vision import * from fastai.gen_doc.nbdoc import * from fastai.core import * from fastai.basic_train import * ###Output _____no_output_____ ###Markdown The fastai library simplifies training fast and accurate neural nets using modern best practices. It's based on research in to deep learning best practices undertaken at [fast.ai](http://www.fast.ai), including "out of the box" support for [`vision`](/vision.htmlvision), [`text`](/text.htmltext), [`tabular`](/tabular.htmltabular), and [`collab`](/collab.htmlcollab) (collaborative filtering) models. If you're looking for the source code, head over to the [fastai repo](https://github.com/fastai/fastai) on GitHub. For brief examples, see the [examples](https://github.com/fastai/fastai/tree/master/examples) folder; detailed examples are provided in the full documentation (see the sidebar). For example, here's how to train an MNIST model using [resnet18](https://arxiv.org/abs/1512.03385) (from the [vision example](https://github.com/fastai/fastai/blob/master/examples/vision.ipynb)): ###Code path = untar_data(URLs.MNIST_SAMPLE) data = ImageDataBunch.from_folder(path) learn = create_cnn(data, models.resnet18, metrics=accuracy) learn.fit(1) jekyll_note("""This documentation is all built from notebooks; that means that you can try any of the code you see in any notebook yourself! You'll find the notebooks in the <a href="https://github.com/fastai/fastai/tree/master/docs_src">docs_src</a> folder of the <a href="https://github.com/fastai/fastai">fastai</a> repo. 
For instance, <a href="https://nbviewer.jupyter.org/github/fastai/fastai/blob/master/docs_src/index.ipynb">here</a> is the notebook source of what you're reading now.""") ###Output _____no_output_____ ###Markdown Installation and updating To install or update fastai, we recommend `conda`:```conda install -c pytorch -c fastai fastai pytorch-nightly cuda92```For troubleshooting, and alternative installations (including pip and CPU-only options) see the [fastai readme](https://github.com/fastai/fastai/blob/master/README.md). Reading the docs To get started quickly, click *Applications* on the sidebar, and then choose the application you're interested in. That will take you to a walk-through of training a model of that type. You can then either explore the various links from there, or dive more deeply into the various fastai modules.We've provided below a quick summary of the key modules in this library. For details on each one, use the sidebar to find the module you're interested in. Each module includes an overview and example of how to use it, along with documentation for every class, function, and method. API documentation looks, for example, like this: An example function ###Code show_doc(rotate) ###Output _____no_output_____ ###Markdown ---Types for each parameter, and the return type, are displayed following standard Python [type hint syntax](https://www.python.org/dev/peps/pep-0484/). Sometimes for compound types we use [type variables](/fastai_typing.html). Types that are defined by fastai or Pytorch link directly to more information about that type; try clicking *Image* in the function above for an example. The docstring for the symbol is shown immediately after the signature, along with a link to the source code for the symbol in GitHub. After the basic signature and docstring you'll find examples and additional details (not shown in this example). As you'll see at the top of the page, all symbols documented like this also appear in the table of contents.For inherited classes and some types of decorated function, the base class or decorator type will also be shown at the end of the signature, delimited by `::`. For `vision.transforms`, the random number generator used for data augmentation is shown instead of the type, for randomly generated parameters. Module structure Imports fastai is designed to support both interactive computing as well as traditional software development. For interactive computing, where convenience and speed of experimentation is a priority, data scientists often prefer to grab all the symbols they need, with `import *`. Therefore, fastai is designed to support this approach, without compromising on maintainability and understanding.In order to do so, the module dependencies are carefully managed (see next section), with each exporting a carefully chosen set of symbols when using `import *`. In general, for interactive computing, you'll want to import from both `fastai`, and from one of the *applications*, such as: ###Code from fastai import * from fastai.vision import * ###Output _____no_output_____ ###Markdown That will give you all the standard external modules you'll need, in their customary namespaces (e.g. `pandas as pd`, `numpy as np`, `matplotlib.pyplot as plt`), plus the core fastai libraries. In addition, the main classes and functions for your application ([`fastai.vision`](/vision.htmlvision), in this case), e.g. 
creating a [`DataBunch`](/basic_data.htmlDataBunch) from an image folder and training a convolutional neural network (with [`create_cnn`](/vision.learner.htmlcreate_cnn)), are also imported. Alternatively, use `from fastai import vision` to import the vision If you wish to see where a symbol is imported from, either just type the symbol name (in a REPL such as Jupyter Notebook or IPython), or (in most editors) wave your mouse over the symbol to see the definition. For instance: ###Code Learner ###Output _____no_output_____ ###Markdown Welcome to fastai ###Code from fastai.vision import * from fastai.gen_doc.nbdoc import * from fastai.core import * from fastai.basic_train import * ###Output _____no_output_____ ###Markdown The fastai library simplifies training fast and accurate neural nets using modern best practices. It's based on research in to deep learning best practices undertaken at [fast.ai](http://www.fast.ai), including "out of the box" support for [`vision`](/vision.htmlvision), [`text`](/text.htmltext), [`tabular`](/tabular.htmltabular), and [`collab`](/collab.htmlcollab) (collaborative filtering) models. If you're looking for the source code, head over to the [fastai repo](https://github.com/fastai/fastai) on GitHub. For brief examples, see the [examples](https://github.com/fastai/fastai/tree/master/examples) folder; detailed examples are provided in the full documentation (see the sidebar). For example, here's how to train an MNIST model using [resnet18](https://arxiv.org/abs/1512.03385) (from the [vision example](https://github.com/fastai/fastai/blob/master/examples/vision.ipynb)): ###Code path = untar_data(URLs.MNIST_SAMPLE) data = ImageDataBunch.from_folder(path) learn = cnn_learner(data, models.resnet18, metrics=accuracy) learn.fit(1) jekyll_note("""This documentation is all built from notebooks; that means that you can try any of the code you see in any notebook yourself! You'll find the notebooks in the <a href="https://github.com/fastai/fastai/tree/master/docs_src">docs_src</a> folder of the <a href="https://github.com/fastai/fastai">fastai</a> repo. For instance, <a href="https://nbviewer.jupyter.org/github/fastai/fastai/blob/master/docs_src/index.ipynb">here</a> is the notebook source of what you're reading now.""") ###Output _____no_output_____ ###Markdown Installation and updating To install or update fastai, we recommend `conda`:```conda install -c pytorch -c fastai fastai ```For troubleshooting, and alternative installations (including pip and CPU-only options) see the [fastai readme](https://github.com/fastai/fastai/blob/master/README.md). Reading the docs To get started quickly, click *Applications* on the sidebar, and then choose the application you're interested in. That will take you to a walk-through of training a model of that type. You can then either explore the various links from there, or dive more deeply into the various fastai modules.We've provided below a quick summary of the key modules in this library. For details on each one, use the sidebar to find the module you're interested in. Each module includes an overview and example of how to use it, along with documentation for every class, function, and method. API documentation looks, for example, like this: An example function ###Code show_doc(rotate, full_name='rotate') ###Output _____no_output_____ ###Markdown ---Types for each parameter, and the return type, are displayed following standard Python [type hint syntax](https://www.python.org/dev/peps/pep-0484/). 
Sometimes for compound types we use [type variables](/fastai_typing.html). Types that are defined by fastai or Pytorch link directly to more information about that type; try clicking *Image* in the function above for an example. The docstring for the symbol is shown immediately after the signature, along with a link to the source code for the symbol in GitHub. After the basic signature and docstring you'll find examples and additional details (not shown in this example). As you'll see at the top of the page, all symbols documented like this also appear in the table of contents.For inherited classes and some types of decorated function, the base class or decorator type will also be shown at the end of the signature, delimited by `::`. For `vision.transforms`, the random number generator used for data augmentation is shown instead of the type, for randomly generated parameters. Module structure Imports fastai is designed to support both interactive computing as well as traditional software development. For interactive computing, where convenience and speed of experimentation is a priority, data scientists often prefer to grab all the symbols they need, with `import *`. Therefore, fastai is designed to support this approach, without compromising on maintainability and understanding.In order to do so, the module dependencies are carefully managed (see next section), with each exporting a carefully chosen set of symbols when using `import *` . In general, for interactive computing, to just play around the core modules and the training loop you can do```from fastai.basics import *```If you want experiment with one of the *applications* such as vision, then you can do```from fastai.vision import *``` That will give you all the standard external modules you'll need, in their customary namespaces (e.g. `pandas as pd`, `numpy as np`, `matplotlib.pyplot as plt`), plus the core fastai libraries. In addition, the main classes and functions for your application ([`fastai.vision`](/vision.htmlvision), in this case), e.g. creating a [`DataBunch`](/basic_data.htmlDataBunch) from an image folder and training a convolutional neural network (with [`cnn_learner`](/vision.learner.htmlcnn_learner)), are also imported. If you don't wish to import any application, but want all the main functionality from fastai, use `from fastai.basics import *`. Of course, you can also just import the specific symbols that you require, without using `import *`.If you wish to see where a symbol is imported from, either just type the symbol name (in a REPL such as Jupyter Notebook or IPython), or (in most editors) wave your mouse over the symbol to see the definition. For instance: ###Code Learner ###Output _____no_output_____ ###Markdown Welcome to fastai ###Code from fastai.vision import * from fastai.gen_doc.nbdoc import * from fastai.core import * from fastai.basic_train import * ###Output _____no_output_____ ###Markdown The fastai library simplifies training fast and accurate neural nets using modern best practices. It's based on research in to deep learning best practices undertaken at [fast.ai](http://www.fast.ai), including "out of the box" support for [`vision`](/vision.htmlvision), [`text`](/text.htmltext), [`tabular`](/tabular.htmltabular), and [`collab`](/collab.htmlcollab) (collaborative filtering) models. If you're looking for the source code, head over to the [fastai repo](https://github.com/fastai/fastai) on GitHub. 
For brief examples, see the [examples](https://github.com/fastai/fastai/tree/master/examples) folder; detailed examples are provided in the full documentation (see the sidebar). For example, here's how to train an MNIST model using [resnet18](https://arxiv.org/abs/1512.03385) (from the [vision example](https://github.com/fastai/fastai/blob/master/examples/vision.ipynb)): ###Code path = untar_data(URLs.MNIST_SAMPLE) data = ImageDataBunch.from_folder(path) learn = cnn_learner(data, models.resnet18, metrics=accuracy) learn.fit(1) jekyll_note("""This documentation is all built from notebooks; that means that you can try any of the code you see in any notebook yourself! You'll find the notebooks in the <a href="https://github.com/fastai/fastai/tree/master/docs_src">docs_src</a> folder of the <a href="https://github.com/fastai/fastai">fastai</a> repo. For instance, <a href="https://nbviewer.jupyter.org/github/fastai/fastai/blob/master/docs_src/index.ipynb">here</a> is the notebook source of what you're reading now.""") ###Output _____no_output_____ ###Markdown Installation and updating To install or update fastai, we recommend `conda`:```conda install -c pytorch -c fastai fastai ```For troubleshooting, and alternative installations (including pip and CPU-only options) see the [fastai readme](https://github.com/fastai/fastai/blob/master/README.md). Reading the docs To get started quickly, click *Applications* on the sidebar, and then choose the application you're interested in. That will take you to a walk-through of training a model of that type. You can then either explore the various links from there, or dive more deeply into the various fastai modules.We've provided below a quick summary of the key modules in this library. For details on each one, use the sidebar to find the module you're interested in. Each module includes an overview and example of how to use it, along with documentation for every class, function, and method. API documentation looks, for example, like this: An example function ###Code show_doc(rotate, full_name='rotate') ###Output _____no_output_____ ###Markdown ---Types for each parameter, and the return type, are displayed following standard Python [type hint syntax](https://www.python.org/dev/peps/pep-0484/). Sometimes for compound types we use [type variables](/fastai_typing.html). Types that are defined by fastai or Pytorch link directly to more information about that type; try clicking *Image* in the function above for an example. The docstring for the symbol is shown immediately after the signature, along with a link to the source code for the symbol in GitHub. After the basic signature and docstring you'll find examples and additional details (not shown in this example). As you'll see at the top of the page, all symbols documented like this also appear in the table of contents.For inherited classes and some types of decorated function, the base class or decorator type will also be shown at the end of the signature, delimited by `::`. For `vision.transforms`, the random number generator used for data augmentation is shown instead of the type, for randomly generated parameters. Module structure Imports fastai is designed to support both interactive computing as well as traditional software development. For interactive computing, where convenience and speed of experimentation is a priority, data scientists often prefer to grab all the symbols they need, with `import *`. 
Therefore, fastai is designed to support this approach, without compromising on maintainability and understanding.In order to do so, the module dependencies are carefully managed (see next section), with each exporting a carefully chosen set of symbols when using `import *` . In general, for interactive computing, to just play around the core modules and the training loop you can do```from fastai.basics import *```If you want experiment with one of the *applications* such as vision, then you can do```from fastai.vision import *``` That will give you all the standard external modules you'll need, in their customary namespaces (e.g. `pandas as pd`, `numpy as np`, `matplotlib.pyplot as plt`), plus the core fastai libraries. In addition, the main classes and functions for your application ([`fastai.vision`](/vision.htmlvision), in this case), e.g. creating a [`DataBunch`](/basic_data.htmlDataBunch) from an image folder and training a convolutional neural network (with [`cnn_learner`](/vision.learner.htmlcnn_learner)), are also imported. If you don't wish to import any application, but want all the main functionality from fastai, use `from fastai.basics import *`. Of course, you can also just import the specific symbols that you require, without using `import *`.If you wish to see where a symbol is imported from, either just type the symbol name (in a REPL such as Jupyter Notebook or IPython), or (in most editors) wave your mouse over the symbol to see the definition. For instance: ###Code Learner ###Output _____no_output_____ ###Markdown Welcome to fastai The fastai library simplifies training fast and accurate neural nets using modern best practices. It's based on research in to deep learning best practices undertaken at [fast.ai](http://www.fast.ai). If you're looking for the source code, head over to the [fastai_pytorch repo](https://github.com/fastai/fastai_pytorch) on GitHub. Installation To install fastai, we recommend `conda`: conda install -c fastai fastaiAlternatively, you can install using pip - if you do so, you'll first need to install the latest pytorch `conda-nightly` package or source from master. pip install fastaifastai is a pure python package, so you can also simply symlink the fastai directory to wherever you're running your code, as long as you've installed the dependencies (listed in the conda and pip files). Reading the docs ###Code from fastai.gen_doc.nbdoc import * from fastai.core import * from fastai.basic_train import * from fastai.vision import * ###Output _____no_output_____ ###Markdown To get started quickly, click *Applications* on the sidebar, and then choose the application you're interested in. That will take you to a walk-through of training a model of that type. You can then either explore the various links from there, or dive more deeply into the various fastai modules.We've provided below a quick summary of the key modules in this library. For details on each one, use the sidebar to find the module you're interested in. Each module includes an overview and example of how to use it, along with documentation for every class, function, and method. API documentation looks, for example, like this: An example function ###Code show_doc(rotate) ###Output _____no_output_____ ###Markdown ---Types for each parameter, and the return type, are displayed following standard Python [type hint syntax](https://www.python.org/dev/peps/pep-0484/). Sometimes for compound types we use [type variables](/fastai_typing.html). 
Types that are defined by fastai or Pytorch link directly to more information about that type; try clicking *Image* in the function above for an example. The docstring for the symbol is shown immediately after the signature, along with a link to the source code for the symbol in GitHub. After the basic signature and docstring you'll find examples and additional details (not shown in this example). As you'll see at the top of the page, all symbols documented like this also appear in the table of contents.For inherited classes and some types of decorated function, the base class or decorator type will also be shown at the end of the signature, delimited by `::`. For `vision.transforms`, the random number generator used for data augmentation is shown instead of the type, for randomly generated parameters. Module structure Imports fastai is designed to support both interactive computing as well as traditional software development. For interactive computing, where convenience and speed of experimentation is a priority, data scientists often prefer to grab all the symbols they need, with `import *`. Therefore, fastai is designed to support this approach, without compromising on maintainability and understanding.In order to do so, the module dependencies are carefully managed (see next section), with each exporting a carefully chosen set of symbols when using `import *`. In general, for interactive computing, you'll want to import from both `fastai`, and from one of the *applications*, such as: ###Code from fastai import * from fastai.vision import * ###Output _____no_output_____ ###Markdown That will give you all the standard external modules you'll need, in their customary namespaces (e.g. `pandas as pd`, `numpy as np`, `matplotlib.pyplot as plt`), plus the core fastai libraries. In addition, the main classes and functions for your application ([`fastai.vision`](/vision.htmlvision), in this case), e.g. creating a [`DataBunch`](/data.htmlDataBunch) from an image folder and training a convolutional neural network (with [`ConvLearner`](/vision.learner.htmlConvLearner)), are also imported.If you wish to see where a symbol is imported from, either just type the symbol name (in a REPL such as Jupyter Notebook or IPython), or (in most editors) wave your mouse over the symbol to see the definition. For instance: ###Code ConvLearner ###Output _____no_output_____ ###Markdown Welcome to fastai ###Code from fastai.vision import * from fastai.gen_doc.nbdoc import * from fastai.core import * from fastai.basic_train import * ###Output _____no_output_____ ###Markdown The fastai library simplifies training fast and accurate neural nets using modern best practices. It's based on research in to deep learning best practices undertaken at [fast.ai](http://www.fast.ai), including "out of the box" support for [`vision`](/vision.htmlvision), [`text`](/text.htmltext), [`tabular`](/tabular.htmltabular), and [`collab`](/collab.htmlcollab) (collaborative filtering) models. If you're looking for the source code, head over to the [fastai repo](https://github.com/fastai/fastai) on GitHub. For brief examples, see the [examples](https://github.com/fastai/fastai/tree/master/examples) folder; detailed examples are provided in the full documentation (see the sidebar). 
For example, here's how to train an MNIST model using [resnet18](https://arxiv.org/abs/1512.03385) (from the [vision example](https://github.com/fastai/fastai/blob/master/examples/vision.ipynb)): ###Code path = untar_data(URLs.MNIST_SAMPLE) data = ImageDataBunch.from_folder(path) learn = create_cnn(data, models.resnet18, metrics=accuracy) learn.fit(1) jekyll_note("""This documentation is all built from notebooks; that means that you can try any of the code you see in any notebook yourself! You'll find the notebooks in the <a href="https://github.com/fastai/fastai/tree/master/docs_src">docs_src</a> folder of the <a href="https://github.com/fastai/fastai">fastai</a> repo. For instance, <a href="https://nbviewer.jupyter.org/github/fastai/fastai/blob/master/docs_src/index.ipynb">here</a> is the notebook source of what you're reading now.""") ###Output _____no_output_____ ###Markdown Installation and updating To install or update fastai, we recommend `conda`:```conda install -c pytorch -c fastai fastai pytorch```For troubleshooting, and alternative installations (including pip and CPU-only options) see the [fastai readme](https://github.com/fastai/fastai/blob/master/README.md). Reading the docs To get started quickly, click *Applications* on the sidebar, and then choose the application you're interested in. That will take you to a walk-through of training a model of that type. You can then either explore the various links from there, or dive more deeply into the various fastai modules.We've provided below a quick summary of the key modules in this library. For details on each one, use the sidebar to find the module you're interested in. Each module includes an overview and example of how to use it, along with documentation for every class, function, and method. API documentation looks, for example, like this: An example function ###Code show_doc(rotate, full_name='rotate') ###Output _____no_output_____ ###Markdown ---Types for each parameter, and the return type, are displayed following standard Python [type hint syntax](https://www.python.org/dev/peps/pep-0484/). Sometimes for compound types we use [type variables](/fastai_typing.html). Types that are defined by fastai or Pytorch link directly to more information about that type; try clicking *Image* in the function above for an example. The docstring for the symbol is shown immediately after the signature, along with a link to the source code for the symbol in GitHub. After the basic signature and docstring you'll find examples and additional details (not shown in this example). As you'll see at the top of the page, all symbols documented like this also appear in the table of contents.For inherited classes and some types of decorated function, the base class or decorator type will also be shown at the end of the signature, delimited by `::`. For `vision.transforms`, the random number generator used for data augmentation is shown instead of the type, for randomly generated parameters. Module structure Imports fastai is designed to support both interactive computing as well as traditional software development. For interactive computing, where convenience and speed of experimentation is a priority, data scientists often prefer to grab all the symbols they need, with `import *`. 
Therefore, fastai is designed to support this approach, without compromising on maintainability and understanding.In order to do so, the module dependencies are carefully managed (see next section), with each exporting a carefully chosen set of symbols when using `import *`. In general, for interactive computing, you'll want to import from both `fastai`, and from one of the *applications*, such as: ###Code from fastai.vision import * ###Output _____no_output_____ ###Markdown That will give you all the standard external modules you'll need, in their customary namespaces (e.g. `pandas as pd`, `numpy as np`, `matplotlib.pyplot as plt`), plus the core fastai libraries. In addition, the main classes and functions for your application ([`fastai.vision`](/vision.htmlvision), in this case), e.g. creating a [`DataBunch`](/basic_data.htmlDataBunch) from an image folder and training a convolutional neural network (with [`create_cnn`](/vision.learner.htmlcreate_cnn)), are also imported. If you don't wish to import any application, but want all the main functionality from fastai, use `from fastai.basics import *`. Of course, you can also just import the specific symbols that you require, without using `import *`.If you wish to see where a symbol is imported from, either just type the symbol name (in a REPL such as Jupyter Notebook or IPython), or (in most editors) wave your mouse over the symbol to see the definition. For instance: ###Code Learner ###Output _____no_output_____
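###Markdown As a hypothetical illustration of that last point (exact module paths and exported names vary between fastai versions, so treat this as a sketch rather than canonical usage): ###Code # Import only the symbols you need, without `import *`
# (names assumed from the fastai v1 namespaces used throughout this page)
from fastai.basic_train import Learner
from fastai.vision import ImageDataBunch, models

print(Learner, ImageDataBunch, models.resnet18) ###Output _____no_output_____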
notebooks/7.1-jalvaradoruiz-visualizacion-data-top.ipynb
###Markdown By court (tribunal) and by offense type ###Code # Group the Law 20.000 (drug) oral-trial cases by offense and court
causas_juiciooral_ley20000.groupby(by=['MATERIA_x','TRIBUNAL'])#.count().sort_values("index", ascending=False).head(250)

# Build a unique case identifier: court code + RIT
# (convert the columns with .astype(str); calling str() on a whole column is not element-wise)
causas_juiciooral_ley20000['RIT TOP'] = causas_juiciooral_ley20000['COD. TRIBUNAL'].astype(str) + "-" + causas_juiciooral_ley20000['RIT'].astype(str)
causas_juiciooral_ley20000

# One row per (offense, court) pair: number of distinct cases, population and derived ratios
data = []
for (materia,tribunal), sub_df in causas_oral.groupby(by=['MATERIA_x','TRIBUNAL']):
    total = len(sub_df.RIT.unique())
    poblacion = sub_df.iloc[0].POBLACION
    urbano = sub_df.iloc[0].URBANO
    rural = sub_df.iloc[0].RURAL
    proporcion = total / poblacion
    territorial = rural / urbano
    row = [materia, tribunal, total, poblacion, proporcion, territorial]
    data.append(row)

ley20000 = pd.DataFrame(data, columns=['MATERIA','TRIBUNAL','TOTAL','POBLACION','PROPORCION', 'TERRITORIAL']).sort_values(["PROPORCION","MATERIA"], ascending=False)[:50]
ley20000

sns.set_style(style="ticks")
sns.pairplot(ley20000, hue="TRIBUNAL")

f, ax = plt.subplots(figsize=(7, 5))
sns.despine(f)
sns.histplot(
    ley20000,
    x="MATERIA", hue="TRIBUNAL",
    multiple="stack",
    palette="light:m_r",
    edgecolor=".3",
    linewidth=.5,
    log_scale=True,
)
ax.xaxis.set_major_formatter(mpl.ticker.ScalarFormatter())
ax.set_xticks([500, 1000, 2000, 5000, 10000])

# Same summary for all non-drug cases (everything except Law 20.000 offenses)
nodrogas = causas_oral.query("`TIPOLOGIA MATERIA` != 'LEY 20.000 TRAFICO ILICITO DE ESTUPEFACIENTES Y SUSTANCIAS SICOTROPICAS'")
data = []
for (materia,tribunal), sub_df in nodrogas.groupby(by=['MATERIA_x','TRIBUNAL']):
    total = len(sub_df.RIT.unique())
    poblacion = sub_df.iloc[0].POBLACION
    proporcion = total / poblacion
    row = [materia, tribunal, total, poblacion, proporcion]
    data.append(row)

pd.DataFrame(data, columns=['MATERIA','TRIBUNAL','TOTAL','POBLACION','PROPORCION']).sort_values("PROPORCION", ascending=False)[:50] ###Output _____no_output_____
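###Markdown A vectorized alternative (a sketch, not part of the original notebook) to the explicit loop above: the same per-(offense, court) summary can be built with a single `groupby`/`agg` call. The name `resumen` is hypothetical and the column choices simply mirror the loop; adjust them if the real dataframe differs. ###Code # Hypothetical vectorized rewrite of the loop above
resumen = (
    causas_oral
    .groupby(['MATERIA_x', 'TRIBUNAL'])
    .agg(
        TOTAL=('RIT', 'nunique'),          # number of distinct cases
        POBLACION=('POBLACION', 'first'),  # population served by the court
        URBANO=('URBANO', 'first'),
        RURAL=('RURAL', 'first'),
    )
    .reset_index()
)
resumen['PROPORCION'] = resumen['TOTAL'] / resumen['POBLACION']
resumen['TERRITORIAL'] = resumen['RURAL'] / resumen['URBANO']
resumen.sort_values(['PROPORCION', 'MATERIA_x'], ascending=False).head(50) ###Output _____no_output_____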
docs/examples/UserRequests/Sweeping two things at once.ipynb
###Markdown Sweeping two things at onceThis example notebook shows how to sweep two parameters in one go. Setting up the dummy experimentOur dummy experiment consists of a DAC and a DMM. The DAC has two channels, each connected to one port of the DMM. One port measures the square of the DAC voltage, the other measures the cube of the DAC voltage (a crude stand-in for a real physical system). ###Code import qcodes as qc
from qcodes.utils.wrappers import init, do1d
from qcodes.tests.instrument_mocks import DummyInstrument

dac = DummyInstrument('dac', gates=['ch1', 'ch2'])
dmm = DummyInstrument('dmm', gates=['v1', 'v2'])

station = qc.Station(dac, dmm)

init('./sandboxdata', 'sandboxsample', station, annotate_image=False)

# set up some simulated wiring and physical system between the dac and dmm
dmm.v1.get = lambda: dac.ch1.get()**2
dmm.v2.get = lambda: dac.ch2.get()**3 ###Output _____no_output_____ ###Markdown Check that this makes sense ###Code do1d(dac.ch1, -2, 2, 25, 0.1, dmm.v1)
do1d(dac.ch2, -2, 2, 25, 0.1, dmm.v2) ###Output Started at 2017-10-17 11:14:13 DataSet: location = '/Users/william/sourcecodes/qdev-dk/docs/examples/Sandbox/sandboxdata/sandboxsample/050' <Type> | <array_id> | <array.name> | <array.shape> Setpoint | dac_ch2_set | ch2 | (25,) Measured | dmm_v2 | v2 | (25,) Finished at 2017-10-17 11:14:16 ###Markdown Simultaneously sweep both in opposite directionsNow let's sweep both voltages in opposite directions simultaneously. ###Code # Define the mapping from one parameter to the two voltages
def combi_setter(value):
    dac.ch1.set(value)
    dac.ch2.set(-value)

# And simply add this as a new parameter to the instrument
dac.add_parameter('combi', label='Dac Ch1 and minus Dac Ch2',
                  set_cmd=combi_setter) ###Output _____no_output_____ ###Markdown Sweep and see that we get the expected traces ###Code do1d(dac.combi, -2, 2, 50, 0.2, dmm.v1, dmm.v2, use_threads=False) ###Output Started at 2017-10-17 11:17:19 DataSet: location = '/Users/william/sourcecodes/qdev-dk/docs/examples/Sandbox/sandboxdata/sandboxsample/051' <Type> | <array_id> | <array.name> | <array.shape> Setpoint | dac_combi_set | combi | (50,) Measured | dmm_v1 | v1 | (50,) Measured | dmm_v2 | v2 | (50,) Finished at 2017-10-17 11:17:32
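###Markdown The same pattern generalizes to any joint move of the two channels. As a sketch (the name `combi_scaled` is hypothetical; only the `add_parameter` and `do1d` calls already used above are assumed), here is a combined parameter that sweeps both channels in the same direction, with `ch2` moving at half the rate of `ch1`: ###Code # Hypothetical second combined parameter: ch1 at full rate, ch2 at half rate
def scaled_setter(value):
    dac.ch1.set(value)
    dac.ch2.set(value / 2)

dac.add_parameter('combi_scaled', label='Dac Ch1 and half of Dac Ch2',
                  set_cmd=scaled_setter)

do1d(dac.combi_scaled, -2, 2, 50, 0.2, dmm.v1, dmm.v2, use_threads=False) ###Output _____no_output_____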
quantitative_economics_with_python/28_cake_eating_i_introduction_to_optimal_saving.ipynb
###Markdown Cake Eating I: Introduction to Optimal Saving Contents- [Cake Eating I: Introduction to Optimal Saving](Cake-Eating-I:-Introduction-to-Optimal-Saving) - [Overview](Overview) - [The Model](The-Model) - [The Value Function](The-Value-Function) - [The Optimal Policy](The-Optimal-Policy) - [The Euler Equation](The-Euler-Equation) - [Exercises](Exercises) - [Solutions](Solutions) OverviewIn this lecture we introduce a simple “cake eating” problem.The intertemporal problem is: how much to enjoy today and how much to leavefor the future?Although the topic sounds trivial, this kind of trade-off between currentand future utility is at the heart of many savings and consumption problems.Once we master the ideas in this simple environment, we will apply them toprogressively more challenging—and useful—problems.The main tool we will use to solve the cake eating problem is dynamic programming.Readers might find it helpful to review the following lectures before reading this one:- The [shortest paths lecture](https://python.quantecon.org/short_path.html) - The [basic McCall model](https://python.quantecon.org/mccall_model.html) - The [McCall model with separation](https://python.quantecon.org/mccall_model_with_separation.html) - The [McCall model with separation and a continuous wage distribution](https://python.quantecon.org/mccall_fitted_vfi.html) In what follows, we require the following imports: ###Code %matplotlib inline import matplotlib.pyplot as plt plt.rcParams["figure.figsize"] = (11, 5) #set default figure size import numpy as np ###Output _____no_output_____ ###Markdown The ModelWe consider an infinite time horizon $ t=0, 1, 2, 3.. $At $ t=0 $ the agent is given a complete cake with size $ \bar x $.Let $ x_t $ denote the size of the cake at the beginning of each period,so that, in particular, $ x_0=\bar x $.We choose how much of the cake to eat in any given period $ t $.After choosing to consume $ c_t $ of the cake in period $ t $ there is$$x_{t+1} = x_t - c_t$$left in period $ t+1 $.Consuming quantity $ c $ of the cake gives current utility $ u(c) $.We adopt the CRRA utility function$$u(c) = \frac{c^{1-\gamma}}{1-\gamma} \qquad (\gamma \gt 0, \, \gamma \neq 1) \tag{28.1}$$In Python this is ###Code def u(c, γ): return c**(1 - γ) / (1 - γ) ###Output _____no_output_____ ###Markdown Future cake consumption utility is discounted according to $ \beta\in(0, 1) $.In particular, consumption of $ c $ units $ t $ periods hence has present value $ \beta^t u(c) $The agent’s problem can be written as$$\max_{\{c_t\}} \sum_{t=0}^\infty \beta^t u(c_t) \tag{28.2}$$subject to$$x_{t+1} = x_t - c_t\quad \text{and} \quad0\leq c_t\leq x_t \tag{28.3}$$for all $ t $.A consumption path $ \{c_t\} $ satisfying [(28.3)](equation-cake-feasible) where$ x_0 = \bar x $ is called **feasible**.In this problem, the following terminology is standard:- $ x_t $ is called the **state variable** - $ c_t $ is called the **control variable** or the **action** - $ \beta $ and $ \gamma $ are **parameters** Trade-OffThe key trade-off in the cake-eating problem is this:- Delaying consumption is costly because of the discount factor. - But delaying some consumption is also attractive because $ u $ is concave. The concavity of $ u $ implies that the consumer gains value from*consumption smoothing*, which means spreading consumption out over time.This is because concavity implies diminishing marginal utility—a progressively smaller gain in utility for each additional spoonful of cake consumed within one period. 
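###Markdown As a quick numerical illustration (not part of the original lecture text), we can use the `u` defined above to check that, because of concavity, a smoother split of a fixed amount of cake across two periods yields more total utility than an uneven split, even before any discounting: ###Code # Compare an uneven split (1.5, 0.5) with an even split (1.0, 1.0)
# Note: with γ > 1 the CRRA utility values are negative, but the comparison still holds.
γ = 1.2

uneven = u(1.5, γ) + u(0.5, γ)
even = u(1.0, γ) + u(1.0, γ)

print(f"utility of uneven split: {uneven:.3f}")
print(f"utility of even split:   {even:.3f}")
print(f"smoothing is better: {even > uneven}") ###Output _____no_output_____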
IntuitionThe reasoning given above suggests that the discount factor $ \beta $ and the curvature parameter $ \gamma $ will play a key role in determining the rate of consumption.Here’s an educated guess as to what impact these parameters will have.First, higher $ \beta $ implies less discounting, and hence the agent is more patient, which should reduce the rate of consumption.Second, higher $ \gamma $ implies that marginal utility $ u'(c) =c^{-\gamma} $ falls faster with $ c $.This suggests more smoothing, and hence a lower rate of consumption.In summary, we expect the rate of consumption to be *decreasing in bothparameters*.Let’s see if this is true. The Value FunctionThe first step of our dynamic programming treatment is to obtain the Bellmanequation.The next step is to use it to calculate the solution. The Bellman EquationTo this end, we let $ v(x) $ be maximum lifetime utility attainable fromthe current time when $ x $ units of cake are left.That is,$$v(x) = \max \sum_{t=0}^{\infty} \beta^t u(c_t) \tag{28.4}$$where the maximization is over all paths $ \{ c_t \} $ that are feasiblefrom $ x_0 = x $.At this point, we do not have an expression for $ v $, but we can stillmake inferences about it.For example, as was the case with the [McCall model](https://python.quantecon.org/mccall_model.html), thevalue function will satisfy a version of the *Bellman equation*.In the present case, this equation states that $ v $ satisfies$$v(x) = \max_{0\leq c \leq x} \{u(c) + \beta v(x-c)\}\quad \text{for any given } x \geq 0. \tag{28.5}$$The intuition here is essentially the same it was for the McCall model.Choosing $ c $ optimally means trading off current vs future rewards.Current rewards from choice $ c $ are just $ u(c) $.Future rewards given current cake size $ x $, measured from next period andassuming optimal behavior, are $ v(x-c) $.These are the two terms on the right hand side of [(28.5)](equation-bellman-cep), aftersuitable discounting.If $ c $ is chosen optimally using this trade off strategy, then we obtain maximal lifetime rewards from our current state $ x $.Hence, $ v(x) $ equals the right hand side of [(28.5)](equation-bellman-cep), as claimed. 
An Analytical SolutionIt has been shown that, with $ u $ as the CRRA utility function in[(28.1)](equation-crra-utility), the function$$v^*(x_t) = \left( 1-\beta^{1/\gamma} \right)^{-\gamma}u(x_t) \tag{28.6}$$solves the Bellman equation and hence is equal to the value function.You are asked to confirm that this is true in the exercises below.The solution [(28.6)](equation-crra-vstar) depends heavily on the CRRA utility function.In fact, if we move away from CRRA utility, usually there is no analyticalsolution at all.In other words, beyond CRRA utility, we know that the value function stillsatisfies the Bellman equation, but we do not have a way of writing itexplicitly, as a function of the state variable and the parameters.We will deal with that situation numerically when the time comes.Here is a Python representation of the value function: ###Code def v_star(x, β, γ): return (1 - β**(1 / γ))**(-γ) * u(x, γ) ###Output _____no_output_____ ###Markdown And here’s a figure showing the function for fixed parameters: ###Code β, γ = 0.95, 1.2 x_grid = np.linspace(0.1, 5, 100) fig, ax = plt.subplots() ax.plot(x_grid, v_star(x_grid, β, γ), label='value function') ax.set_xlabel('$x$', fontsize=12) ax.legend(fontsize=12) plt.show() ###Output _____no_output_____ ###Markdown The Optimal PolicyNow that we have the value function, it is straightforward to calculate theoptimal action at each state.We should choose consumption to maximize theright hand side of the Bellman equation [(28.5)](equation-bellman-cep).$$c^* = \arg \max_{c} \{u(c) + \beta v(x - c)\}$$We can think of this optimal choice as a function of the state $ x $, inwhich case we call it the **optimal policy**.We denote the optimal policy by $ \sigma^* $, so that$$\sigma^*(x) := \arg \max_{c} \{u(c) + \beta v(x - c)\}\quad \text{for all } x$$If we plug the analytical expression [(28.6)](equation-crra-vstar) for the value functioninto the right hand side and compute the optimum, we find that$$\sigma^*(x) = \left( 1-\beta^{1/\gamma} \right) x \tag{28.7}$$Now let’s recall our intuition on the impact of parameters.We guessed that the consumption rate would be decreasing in both parameters.This is in fact the case, as can be seen from [(28.7)](equation-crra-opt-pol).Here’s some plots that illustrate. ###Code def c_star(x, β, γ): return (1 - β ** (1/γ)) * x ###Output _____no_output_____ ###Markdown Continuing with the values for $ \beta $ and $ \gamma $ used above, theplot is ###Code fig, ax = plt.subplots() ax.plot(x_grid, c_star(x_grid, β, γ), label='default parameters') ax.plot(x_grid, c_star(x_grid, β + 0.02, γ), label=r'higher $\beta$') ax.plot(x_grid, c_star(x_grid, β, γ + 0.2), label=r'higher $\gamma$') ax.set_ylabel(r'$\sigma(x)$') ax.set_xlabel('$x$') ax.legend() plt.show() ###Output _____no_output_____
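###Markdown As a further illustration (again, not part of the original lecture text), we can simulate the path implied by the optimal policy $ \sigma^* $: starting from an arbitrary initial cake size, consumption is a constant fraction of the remaining cake, so both $ c_t $ and $ x_t $ decay geometrically. The parameter values $ \beta $ and $ \gamma $ are the ones used in the plots above; the initial cake size and horizon are arbitrary choices. ###Code # Simulate the optimal consumption path under σ*(x) = (1 - β**(1/γ)) * x
x0, T = 2.5, 30   # initial cake size and number of periods (arbitrary choices)

x_path = np.empty(T + 1)
c_path = np.empty(T)
x_path[0] = x0
for t in range(T):
    c_path[t] = c_star(x_path[t], β, γ)
    x_path[t + 1] = x_path[t] - c_path[t]

fig, ax = plt.subplots()
ax.plot(c_path, label='consumption $c_t$')
ax.plot(x_path, label='remaining cake $x_t$')
ax.set_xlabel('$t$')
ax.legend()
plt.show() ###Output _____no_output_____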
api-book/_build/jupyter_execute/chapter-4-ML/machine-learning-model.ipynb
###Markdown Creating machine learning models High level ML project management The work of creating a machine learning model usually falls into the following steps: ![ml-flow](media/ml-flow.png)* The business first needs to define the problem and the potential value that a solution will bring. * The second step is to translate the business problem into a machine learning problem. * The third step is to run a lot of experiments: try out many ML algorithms, do feature engineering, debate with your colleagues and present the results. * The final step is to decide which model to use and start thinking about deployment. The deployment part has historically not been the job of an ML practitioner, but this is changing rapidly. If any problem is too big to overcome in a given step, then the team should go a step back and rethink the previous step. Business problem ![CSGO-logo](media/csgo-logo.jpeg) Imagine that we are working in a huge analytics company and our new task is to model the probability of the Counter Terrorist (**CT** for short) team winning a Counter Strike: Global Offensive (**CSGO** for short) game.The rules of the game are simple: there are two teams, named terrorists and counter-terrorists, each consisting of 5 players. At the start of the round each player buys weapons, armor and other equipment, and the objective is to win the match.To read more about the game visit the official website: https://blog.counter-strike.net/index.php/about/This esport is very popular and our analytics company is trying to break into the gaming market with a very accurate model which will be shown on TV, on gaming streams and in other places. Rules of the game The ultimate victory of a CSGO match is when a team, either CT or T, earns **16 points**. A point is earned when a match is won. Match winning criteria: * A given team eliminates all 5 players of the opposite team. * If the terrorists have planted the bomb, then the winning criterion for the CT team is to defuse the bomb, and for the T team to win the match the bomb needs to explode.The maximum number of seconds in a match is **175.00**.There are 5 CT and 5 T players at match start. Each of them has **100 hit points (HP)** and can buy up to **100 armor** and a helmet.Players earn in-game dollars during a match which can be spent on weapons, grenades, armor and other accessories. Machine learning problem After the business problem is defined and the rules of the game are clear, we now need to convert the business problem into a machine learning problem. If we define: $$ \mathbb{Y}_{i} \in \{0, 1\}, \forall i = 1, ..., n$$ $$ \mathbb{X}_{i} \in \mathbb{R}^{p}, \forall i = 1, ..., n$$Where $i$ - observation index, $n$ - total number of observations, $p$ - number of features.Then we are trying to create a model for the probability of observing the $\mathbb{Y}=1$ event given $\mathbb{X}$:$$P(\mathbb{Y}=1|\mathbb{X}) \in (0, 1)$$$\mathbb{Y} = 1$ means that the CT team has won and $\mathbb{Y} = 0$ means that the CT team has lost.The function $f$ that links $\mathbb{X}$ to $\mathbb{Y}$ is the machine learning model which we are trying to build:$$ f: \mathbb{X} \rightarrow \mathbb{Y} $$ Because we are trying to predict an observation falling into one of two classes (CT winning or losing), the machine learning model $f$ can be called a *binary classifier*. Python package imports The first thing that any developer or ML practitioner does is load up the packages which are installed on his/her machine.
###Code # Data reading
import pandas as pd

# Main modeling class
import xgboost as xgb

# Data splitting
from sklearn.model_selection import train_test_split

# Plotting library
import matplotlib.pyplot as plt
import seaborn as sns

# Array math
import numpy as np

# Modeling frameworks
from sklearn.linear_model import LogisticRegression

# Accuracy metrics
from sklearn.metrics import roc_auc_score, roc_curve

# Hyperparameter search
from sklearn.model_selection import ParameterGrid

# Model saving
import pickle

# Operating system functionalities
import os

# JSON saving and loading
import json ###Output _____no_output_____ ###Markdown Reading data Finding, cleaning and labelling data is usually a long and painful process. This is not the main emphasis of this book, so let's imagine that we have already spent months creating the beautiful dataset which we will read.The original dataset can be found here: https://www.kaggle.com/christianlillelund/csgo-round-winner-classification ###Code # Using pandas to read a csv file
d = pd.read_csv("data/data.csv")

# Printing the shape of data
print(f"Number of observations: {d.shape[0]}")
print(f"Number of features: {d.shape[1]}")

# Getting the feature names
d.columns.values

# Displaying a snippet of data
print(d.head()) ###Output    time_left  ct_score  t_score       map  bomb_planted  ct_health  t_health  \
0     175.00       0.0      0.0  de_dust2         False      500.0     500.0
1     156.03       0.0      0.0  de_dust2         False      500.0     500.0
2      96.03       0.0      0.0  de_dust2         False      391.0     400.0
3      76.03       0.0      0.0  de_dust2         False      391.0     400.0
4     174.97       1.0      0.0  de_dust2         False      500.0     500.0

   ct_armor  t_armor  ct_money  ...  t_grenade_flashbang  \
0       0.0      0.0    4000.0  ...                  0.0
1     400.0    300.0     600.0  ...                  0.0
2     294.0    200.0     750.0  ...                  0.0
3     294.0    200.0     750.0  ...                  0.0
4     192.0      0.0   18350.0  ...                  0.0

   ct_grenade_smokegrenade  t_grenade_smokegrenade  \
0                      0.0                     0.0
1                      0.0                     2.0
2                      0.0                     2.0
3                      0.0                     0.0
4                      0.0                     0.0

   ct_grenade_incendiarygrenade  t_grenade_incendiarygrenade  \
0                           0.0                          0.0
1                           0.0                          0.0
2                           0.0                          0.0
3                           0.0                          0.0
4                           0.0                          0.0

   ct_grenade_molotovgrenade  t_grenade_molotovgrenade  \
0                        0.0                       0.0
1                        0.0                       0.0
2                        0.0                       0.0
3                        0.0                       0.0
4                        0.0                       0.0

   ct_grenade_decoygrenade  t_grenade_decoygrenade  round_winner
0                      0.0                     0.0            CT
1                      0.0                     0.0            CT
2                      0.0                     0.0            CT
3                      0.0                     0.0            CT
4                      0.0                     0.0            CT

[5 rows x 97 columns] ###Markdown A short description about the data from the kaggle source: *The dataset consists of round snapshots from about 700 demos from high level tournament play in 2019 and 2020. Warmup rounds and restarts have been filtered, and for the remaining live rounds a round snapshot has been recorded every 20 seconds until the round is decided. Following its initial publication, It has been pre-processed and flattened to improve readability and make it easier for algorithms to process. The total number of snapshots is 122411. **Snapshots are i.i.d and should be treated as individual data points**, not as part of a match.* The feature that will be used for the creation of the $\mathbb{Y}$ variable is **round_winner**. If CT have won, then the value of $\mathbb{Y}$ will be 1 and 0 otherwise.
###Code # Creating the Y variable
d['Y'] = [1 if x == 'CT' else 0 for x in d['round_winner']]

# Inspecting the distribution of the classes
distribution = d.groupby('Y', as_index=False).size()
distribution['Y'] = distribution['Y'].astype(str)
distribution['share'] = distribution['size'] / distribution['size'].sum()

plt.bar(
    distribution['Y'],
    distribution['share'],
    edgecolor='black'
)
plt.title("Share of binary responses in data")
plt.ylabel("Share in data")
plt.xlabel("Response value")
plt.show() ###Output _____no_output_____ ###Markdown The classes are almost perfectly balanced. Dropping inconsistencies ###Code # Keep only rows with a valid number of alive players (at most 5 per side)
d = d[(d['t_players_alive']<=5) & (d['ct_players_alive']<=5)].copy() ###Output _____no_output_____ ###Markdown Feature engineering Feature engineering is the process of using domain knowledge to create additional features from the raw features in data. A lot of experimentation time is spent here and not all the features created end up improving the model. Nevertheless, if we create at least one new feature from the given list of features which improves the performance of our classifier, then we have added immense value to the original dataset without investing into new data collection.The AI expert Andrew Ng has proposed that the current ML industry should move from the model centric approach to the data centric approach {cite}`data_centric`:*"If 80 percent of our work is data preparation, then ensuring data quality is the important work of a machine learning team."*Andrew Ng urges practitioners to shift the focus away from trying out new models on a fixed dataset and instead to fix a model and then engineer new features, label new data points and do other data related experiments. Regardless of which school of thought wins out, developing new features is paramount in either case. ###Code # Boolean for the planting of the bomb event
d['bomb_planted'] = [1 if x else 0 for x in d['bomb_planted']]

# The difference between the team scores
d['team_score_diff'] = d['ct_score'] - d['t_score']

# Putting the team_score_diff into buckets
cut_bins_score = [-15, -5, 0, 5, 15]
d['team_score_diff'] = pd.cut(d['team_score_diff'], bins=cut_bins_score)

# Calculating the share of remaining health held by CT
d['ct_health_share'] = d['ct_health'] / (d['t_health'] + d['ct_health'])

# Calculating the armor per alive CT player
d['ct_armor_per_player'] = d['ct_armor'] / d['ct_players_alive']

# Total money share owned by CT
d['ct_money_share'] = d['ct_money'] / (d['t_money'] + d['ct_money'])

# Difference between alive CT players and T players
d['ct_players_alive_diff'] = d['ct_players_alive'] - d['t_players_alive']

# Is there a defuse kit in the CT team?
d['ct_defuse_kit_present'] = [1 if x > 0 else 0 for x in d['ct_defuse_kits']] ###Output _____no_output_____ ###Markdown Exploratory Data Analysis Bomb planting event ###Code # Calculating the probability of winning when a bomb is planted
prob_w = d.groupby(['bomb_planted'])['Y'].agg(['sum', 'size'])
prob_w['prob_of_win_CT'] = prob_w['sum'] / prob_w['size']

# Adding a custom index
prob_w.index = ['bomb not planted', 'bomb planted']

# Plotting the results
plt.bar(
    prob_w.index,
    prob_w['prob_of_win_CT'],
    edgecolor='black'
)
plt.title("Probability of CT winning")
plt.ylabel("Probability")
plt.show()

print(prob_w) ###Output                     sum    size  prob_of_win_CT
bomb not planted  56904  108725        0.523375
bomb planted       3100   13684        0.226542 ###Markdown As we can see, if a bomb is planted, the odds of winning for the CT squad are more than two times lower than if the bomb is not planted: **0.52** and **0.23** respectively.
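###Markdown As a quick statistical sanity check (a sketch, not part of the original text), we can run a two-proportion z-test by hand on the counts stored in `prob_w` to confirm that a gap of roughly 30 percentage points on samples of this size is far beyond what random noise could produce: ###Code # Two-proportion z-test for the difference in CT win rates
x1, n1 = prob_w.loc['bomb not planted', 'sum'], prob_w.loc['bomb not planted', 'size']
x2, n2 = prob_w.loc['bomb planted', 'sum'], prob_w.loc['bomb planted', 'size']

p1, p2 = x1 / n1, x2 / n2
p_pooled = (x1 + x2) / (n1 + n2)
se = np.sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

print(f"difference in win rates: {p1 - p2:.3f}")
print(f"z statistic: {z:.1f}") ###Output _____no_output_____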
Maps ###Code # Calculating the probability of CT winning on each map
prob_w = d.groupby(['map'])['Y'].agg(['sum', 'size'])
prob_w['prob_of_win_CT'] = prob_w['sum'] / prob_w['size']

# Plotting the results
plt.figure(figsize=(12, 7))
plt.bar(
    prob_w.index,
    prob_w['prob_of_win_CT'],
    edgecolor='black'
)
plt.title("Probability of CT winning")
plt.ylabel("Probability")
plt.axhline(y=0.5, color='r', linestyle='--')
plt.show()

print(prob_w) ###Output                sum   size  prob_of_win_CT
map
de_cache       103    145        0.710345
de_dust2     10158  22144        0.458725
de_inferno   10810  23811        0.453992
de_mirage     9144  18576        0.492248
de_nuke      10214  19025        0.536873
de_overpass   7026  14081        0.498970
de_train      7310  13491        0.541843
de_vertigo    5239  11136        0.470456 ###Markdown The map **de_cache** seems to be a clear outlier in the dataset: the CTs win on this map in more than 70% of the matches. Tilting The definition of tilting in esports is ***a state of mental or emotional confusion or frustration***. We can measure it through the influence of the current match score in favor of CT on the probability of winning. ###Code # Calculating the probability of CT winning by score difference
prob_w = d.groupby(['team_score_diff'])['Y'].agg(['sum', 'size'])
prob_w['prob_of_win_CT'] = prob_w['sum'] / prob_w['size']

# Adjusting the index
prob_w.index = [str(x) for x in prob_w.index]

# Plotting the results
plt.figure(figsize=(10, 6))
plt.bar(
    prob_w.index,
    prob_w['prob_of_win_CT'],
    edgecolor='black'
)
plt.title("Probability of CT winning")
plt.ylabel("Probability")
plt.xlabel("Difference between scores in favor of CT")
plt.axhline(y=0.5, color='r', linestyle='--')
plt.show() ###Output _____no_output_____ ###Markdown There is a clear relationship between the current match score and the probability of winning: the bigger the score difference in favor of CT, the higher the chances of winning the current match. Health, armor and money influence ###Code # Plotting the distributions of CT health share
plt.figure(figsize=(10, 6))
plt.hist(
    d.loc[d['Y']==1, 'ct_health_share'].values,
    alpha=0.5,
    label='CT won match',
    edgecolor='black',
    bins=20
)
plt.hist(
    d.loc[d['Y']==0, 'ct_health_share'].values,
    alpha=0.5,
    label='CT lost match',
    edgecolor='black',
    bins=20
)
plt.legend()
plt.title("Distribution of CT health share of total HP pool by match win event")
plt.ylabel("Number of matches")
plt.xlabel("Share of total HP pool")
plt.show() ###Output _____no_output_____ ###Markdown As our intuition suggested, the larger the share of the total HP pool held by CT, the higher the probability of winning. ###Code plt.figure(figsize=(10, 6))
sns.kdeplot(
    d.loc[d['Y']==1, 'ct_armor_per_player'].values,
    shade=True,
    linewidth=2,
    label = 'CT won match'
)
sns.kdeplot(
    d.loc[d['Y']==0, 'ct_armor_per_player'].values,
    shade=True,
    linewidth=2,
    label = 'CT lost match'
)
plt.legend()
plt.title("Distribution of CT armor per player by match win event")
plt.ylabel("Share of matches")
plt.xlabel("Armor per player")
plt.show() ###Output _____no_output_____ ###Markdown The density of matches won by CT shifts towards higher values of armor per player: the better armored the team, the more often it wins.
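###Markdown Before moving on to money, a quick numeric complement (a sketch) to the density plot above: the mean and median armor per alive CT player, split by match outcome. ###Code # Summary of armor per alive CT player by match outcome
print(d.groupby('Y')['ct_armor_per_player'].agg(['mean', 'median'])) ###Output _____no_output_____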
###Code plt.figure(figsize=(10, 6)) plt.hist( d.loc[d['Y']==1, 'ct_money_share'].values, alpha=0.5, label='CT won match', edgecolor='black', bins=20 ) plt.hist( d.loc[d['Y']==0, 'ct_money_share'].values, alpha=0.5, label='CT lost match', edgecolor='black', bins=20 ) plt.legend() plt.title("Distribution of all money owned by CT by match win event") plt.ylabel("Number of matches") plt.xlabel("Share of total money owned") plt.show() ###Output _____no_output_____ ###Markdown As with the health case, having more of the total economy in the game helps positively to win a match. Impact of alive players ###Code # Calculating the probability of winning when a bomb is planted prob_w = d.groupby(['ct_players_alive', 't_players_alive'], as_index=False)['Y'].agg(['sum', 'size']) prob_w['prob_of_win_CT'] = prob_w['sum'] / prob_w['size'] # Droping the obvious cases of CT=0 and T=0 prob_w = prob_w[[False if x[0]==0.0 or x[1]==0.0 else True for x in prob_w.index]] # Creating a dataframe for a heatmap heatmap_df = pd.DataFrame({ 'ct_players_alive': prob_w.index.get_level_values(0), 't_players_alive': prob_w.index.get_level_values(1), 'p': prob_w['prob_of_win_CT'] }) heatmap_df = heatmap_df.pivot(index='ct_players_alive', columns='t_players_alive', values='p') # Drawing the heatmap plt.figure(figsize=(8, 8)) sns.heatmap(heatmap_df, linewidths=.5, cmap="YlGnBu") plt.title("Heatmap of probability to win vs alive players") plt.show() ###Output _____no_output_____ ###Markdown Even having one player advantage in a CSGO match leads to huge increases in probability of winning. The highest probability to win is where there are alot of alive CT players and not much alive T players. Defusal kit necesity If a bomb is planted in the game, the only way to defuse it is with a difusal kit. ###Code # Calculating the probability of winning when a bomb is planted prob_w = d.groupby(['ct_defuse_kit_present'])['Y'].agg(['sum', 'size']) prob_w['prob_of_win_CT'] = prob_w['sum'] / prob_w['size'] # Adding a custom index prob_w.index = ['Defuse kit not present', 'Defuse kit present'] # Ploting the results plt.bar( prob_w.index, prob_w['prob_of_win_CT'], edgecolor='black' ) plt.title("Probability of CT winning") plt.ylabel("Probability") plt.show() prob_w ###Output _____no_output_____ ###Markdown Having a defusal kit in a team really proves to be beneficial! Evaluating model performance In order to compare algorithms with one another or to measure the impact of new data and features, we need to have a performance metric (or more than one). One of the most popular metrics in measuring binary classifiers is the **Area Under the Curve metric (AUC)**. In order to have a grasp on AUC we first need to make sense of some intermediate definitions. Confusion matrix In the field of machine learning and specifically the problem of statistical classification, a confusion matrix is a specific table layout that allows visualization of the performance of an algorithm. Each row of the matrix represents the instances in an actual class while each column represents the instances in a predicted class, or vice versa – both variants are found in various textbooks and articles. ![confusion-matrix](media/confusion-matrix.jpg)The abbreviations stand for: **TP** - True Positives **FN** - False Negatives **FP** - False Positives **TN** - True Negatives The **actual values** refer to the actual ending of the matches. In our case, if CT have won this is termed as a *positive* and if CT have lost then this is termed as a *negative*. 
The predicted values refer to the outcome predicted by the machine learning algorithm. Thus: 

* If a match is actually won by CT and our algorithm predicted the same, then that observation is a True Positive. 
* If a match is actually won by CT but our algorithm predicted that CT lost, then that observation is a False Negative. 
* If a match is actually lost by CT but our algorithm predicted that CT won, then that observation is a False Positive. 
* If a match is actually lost by CT and our algorithm predicted that CT lost, then that observation is a True Negative. 

A perfect classifier would have only TPs and TNs in the confusion matrix and no FNs and FPs. Most of the time, this is not the case. Model threshold Most popular ML models do not just output 1 or 0 (meaning that CT won or lost) given a set of features $\mathbb{X}$. Rather, they output a **probability**. Recall that a binary classifier is just a probability model: $$ f(\mathbb{X}) = P(\mathbb{Y} = 1| \mathbb{X}) \in (0, 1)$$ So the output of the algorithm can be 0.0148, 0.5897, 0.998 and so on. By default, a label of 1 (CT winning a match) is given to an observation when $f(\mathbb{X}) \geqslant 0.5$. In other words, the threshold **t** = 0.5. In general terms: $$ y_{predicted} = \begin{cases} 1, & f(\mathbb{X}) \geqslant t \\ 0, & f(\mathbb{X}) < t \end{cases}, \quad t \in (0, 1)$$ Although the default threshold of 0.5 is generally advised, in some cases a user can vary the threshold to achieve better results. Receiver operating characteristic curve (ROC) A receiver operating characteristic curve, or **ROC** curve, is a graphical plot that illustrates the performance of a binary classifier as the threshold is varied. It is a 2D plot where the X axis is the **False Positive Rate (FPR)** and the Y axis is the **True Positive Rate (TPR)**. FPR and TPR are defined as follows: $$FPR = \dfrac{FP}{N}$$ $$TPR = \dfrac{TP}{P}$$ Here **FP** is the number of false positives generated by the classifier, **TP** is the number of true positives generated by the classifier, and **N** and **P** are the total number of "negative" and "positive" class observations in the data respectively. An example ROC plot: ![roc-curve](media/roc-example.png) Notice that the axis values are in the interval **[0, 1]**. Although it may not look like it, the orange curve is made up of many points that are connected to form a line (hence the term "curve"). Each point was obtained using a different threshold **t**. We always want a classifier whose ROC curve spikes as much as possible towards the top left corner. The closer the curve is to the bottom right corner, the worse the classifier. If the curve shoots up rapidly, that means that by adjusting the threshold a little, the true positive rate (the share of "positive" class observations identified correctly) becomes very high while the errors the model makes stay minimal (the FPR is near zero). Further adjusting the threshold may identify more of the positive class observations, but it will come at the cost of increasing the FPR. 
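To make the threshold mechanics concrete, here is a small self-contained sketch (the scores and labels are invented purely for illustration, not taken from our data) that computes a few ROC points by hand:
###Code
import numpy as np

# Toy sketch: build ROC points manually by sweeping the threshold t.
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])          # made-up ground truth
scores = np.array([0.95, 0.80, 0.70, 0.65, 0.55, 0.45,
                   0.40, 0.30, 0.20, 0.10])                 # made-up model outputs

P, N = y_true.sum(), (1 - y_true).sum()
for t in [0.9, 0.7, 0.5, 0.3, 0.1]:
    y_pred = (scores >= t).astype(int)
    tp = ((y_pred == 1) & (y_true == 1)).sum()
    fp = ((y_pred == 1) & (y_true == 0)).sum()
    print(f"t={t:.1f}  TPR={tp / P:.2f}  FPR={fp / N:.2f}")
###Output
_____no_output_____
###Markdown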
To put everything in an interactive way, please watch the video by the great StatQuest team about ROC curves: https://www.youtube.com/watch?v=4jRBRDbJemM Another great resource on this topic: https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc AUC statistic The area under the curve (AUC) statistic is the integral of a given ROC curve between the points (0,0) and (1,1): ![auc-plots](media/auc-example.png)The perfect estimator has an are under the curve of 1.0, a bad estimator has the value of 0.5 and bellow. In practise, a classifier with the AUC statistic above 0.8 is consider to be good and AUC above 0.9 is considered to be very good. For the objective of creating an ML model for the winner of a CSGO match, we will use the AUC statistic as the main measure of the "goodness" of the model. Creating the train, validation and test sets When creating machine learning models it is very advised to split the data into a **train**, **validation** and **test** sets. A good general rule of thumb is to have ~80% of the data to train the algotithm, ~10% of the data to use in various parameter tuning and ~10% of the data to only use in final performance metric calculation. All of these datasets are needed to make sure that our model does not **overfit**. Overfitting problem As stated beautifully in the book "Introduction to Statistical Learning"{cite}`stat_learning`:**"When we overfit the training data, the test performance metrics will be very large because the supposed patterns that the method found in the training data simply don’t exist in the test data. Note that regardless of whether or not overfitting has occurred, we almost always expect the training errors to be smaller than the test errors because most statistical learning methods either directly or indirectly seek to minimize the training errors"**In other words, if we only use training data when creating ML models, we are blinded a bit and do not know how will the model perform with unseen data. As per {cite}`train_val_test`:**"The training set the largest corpus of your dataset that you reserve for training your model. After training, inference on these images will be taken with a grain of salt, since the model has already had a chance to look at and memorize the correct output."****"The validation set is a separate section of your dataset that you will use during training to get a sense of how well your model is doing on images that are not being used in training. During training, it is common to report validation metrics continually after each training epoch . You use these metrics to get a sense of when your model has hit the best performance it can reach on your validation set. You may choose to cease training at this point As you work on your model, you can continually iterate on your dataset, image augmentations, and model design to increase your model's performance on the validation set."****"After all of the training experiments have concluded, you probably have gotten a sense on how your model might do on the validation set. But it is important to remember that the validation set metrics may have influenced you during the creation of the model, and in this sense you might, as a designer, overfit the new model to the validation set. Because the validation set is heavily used in model creation, it is important to hold back a completely separate stronghold of data - the test set. 
You can run evaluation metrics on the test set at the very end of your project, to get a sense of how well your model will do in production."** Feature list After the feature engineering steps and EDA we can define the final feature list which we will use in our models: ###Code # Initial list features = [ 'bomb_planted', 'ct_health_share', 'ct_players_alive', 't_players_alive', 'ct_defuse_kit_present', 'ct_helmets', 't_helmets' ] ###Output _____no_output_____ ###Markdown **NOTE:** some of the features will be left out because of iterative inspection of model results and EDA. ###Code # Creating dummy vars for the map feature map_df = pd.get_dummies(d['map']) # Map feature names map_features = map_df.columns.values.tolist() # Concatenating the map_df to original dataframe d = pd.concat([d, map_df], axis=1) # Adding the map features to the original feature list #features += map_features # Creating dummy vars for the team_score_diff features score_df = pd.get_dummies(d['team_score_diff']) # Score feature names score_df.columns = [f"team_score_diff_in_{str(x)}" for x in score_df.columns] score_features = score_df.columns.values.tolist() # Concatenating the map_df to original dataframe d = pd.concat([d, score_df], axis=1) # Adding the map features to the original feature list #features += score_features print(f"""Final feature list: \n \n {features} \n \n Number of features: {len(features)}""") ###Output Final feature list: ['bomb_planted', 'ct_health_share', 'ct_players_alive', 't_players_alive', 'ct_defuse_kit_present', 'ct_helmets', 't_helmets'] Number of features: 7 ###Markdown Spliting the original dataset We will use 80% of the data to train the model, 10% for validating our model and search for hyper parameters and 10% of the data will be reserved for the test set. For reproducibility, we will set a random seed of **123**. 
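Since the classes are close to balanced, the plain random split in the next cell is perfectly reasonable; for reference, a stratified variant that forces identical class shares in all three sets could look like the sketch below (the `strat_*` names are only for illustration and are not used later):
###Code
from sklearn.model_selection import train_test_split

# Sketch: a stratified 80/10/10 split as an alternative to the plain random split below.
frame = d[features + ['Y']].dropna().reset_index(drop=True)
strat_train, strat_rest = train_test_split(frame, test_size=0.2, random_state=123,
                                           stratify=frame['Y'])
strat_val, strat_test = train_test_split(strat_rest, test_size=0.5, random_state=123,
                                         stratify=strat_rest['Y'])
print(strat_train['Y'].mean(), strat_val['Y'].mean(), strat_test['Y'].mean())
###Output
_____no_output_____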
###Code # Setting the seed seed = 123 # Subseting the dataframe to the features needed + the target variable dsubset = d[features + ['Y']].copy() # Dropping missing values dsubset.dropna(inplace=True) # Reseting the indexz dsubset.reset_index(inplace=True, drop=True) # Spliting to train and test sets train, test = train_test_split(dsubset, test_size=0.2, random_state=seed) # Further spliting the test to test and validation sets test, val = train_test_split(test, test_size=0.5, random_state=seed) print(f"Total number of rows of the dataset: {d.shape[0]}") print(f"Rows in the train set: {train.shape[0]}") print(f"Rows in the validation set: {val.shape[0]}") print(f"Rows in the test set: {test.shape[0]}") ###Output Total number of rows of the dataset: 122409 Rows in the train set: 97926 Rows in the validation set: 12241 Rows in the test set: 12241 ###Markdown Creating the X and Y matrices ###Code # Final matrices for training and validating models train_X, train_Y = train[features], train['Y'] val_X, val_Y = val[features], val['Y'] test_X, test_Y = test[features], test['Y'] # Printing the stats about the distribution of Ys print(f"Share of CT wins in training: {np.sum(train_Y) / len(train_Y)}") print(f"Share of CT wins in validation: {np.sum(val_Y) / len(val_Y)}") print(f"Share of CT wins in testing: {np.sum(test_Y) / len(test_Y)}") ###Output Share of CT wins in training: 0.4892367706227151 Share of CT wins in validation: 0.494812515317376 Share of CT wins in testing: 0.4932603545461972 ###Markdown ML model creation Performance metric for a binary classifier As was stated in the introduction of this book, when creating an ML model we need to have a performance metric to see how the model is performing and to measure how it improves over time. One of the most popular choice is the **area under the curve (AUC)** metric. The metric calculates the plot bellow the **receiver operating curve (ROC)**. The ROC curve plots the true positive (TP) and the false positive (FP) rates using different tresholds. Logistic Regression model Logistic regression {cite}`100_page_ml` is used when we want to model the probability: $$P(\mathbb{Y}|\mathbb{X})$$The above probability reads as "the probability of $\mathbb{Y}$ given $\mathbb{X}$". In other words, how do the features in $\mathbb{X}$ influence the event of $\mathbb{Y}$? The full equation for the probability which we will be trying to fit to the given data is: $$ P(\mathbb{Y}|\mathbb{X}) = \dfrac{1}{1 + e^{-\mathbb{X} \beta }}$$Where $\mathbb{X}$ - a feature matrix of $n$ observations and $p$ features. $\beta$ - a vector of dimensions $p$ x 1. In other words, we get a coefficient for each feature in the $\mathbb{X}$ matrix. What is very helpful in logistic regression is that a negative coefficient for a given feature means that increasing the feature $x_{i}$ will lower the probability of a CT win. If a coefficient is positive, then increasing the $x_{i}$ value increases the probability of a CT win. This simple fact helps us in some quick sanity checks - the logic of the EDA analysis should hold in respect to coefficient values and signs. Fitting logistic regression Hyper parameter tuning We will try and find a combination of best hyper parameters from a given grid of parameters using the validation set. 
The full list of logistic regression HPs can be found here: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html?highlight=logistic%20regressionsklearn.linear_model.LogisticRegression ###Code # Defining a list of hyperparameters hp_dict = { 'C': [0.1, 0.5, 1, 1.5, 2], 'max_iter': [1000], 'fit_intercept': [True], 'solver': ['liblinear'], 'penalty': ['l1', 'l2'] } # Creating the hp grid hp_grid = ParameterGrid(hp_dict) # Placeholders for the iteration auc_val_best = 0 best_hp = {} results = pd.DataFrame({}) # Iterating through all the parameters and evaluating the results for i, hp in enumerate(hp_grid): # Initiating the empty model/classifier clf = LogisticRegression(**hp) # Fitting on data clf.fit(train_X, train_Y) # Predicting on the validation set yhat_val = [x[1] for x in clf.predict_proba(val_X)] # Calculating the AUC metric auc_val = roc_auc_score(val_Y, yhat_val) # Adding to the results frame hp_results = pd.DataFrame(hp, index=[i]) hp_results['auc'] = auc_val results = results.append(hp_results) # Checking if this is the highest auc if auc_val > auc_val_best: auc_val_best = auc_val best_hp = hp # Sorting by the AUC score results.sort_values('auc', ascending=False, inplace=True) # Printing out the results frame print(results) # Printing out the best hyper parameter dictionary print(f"Best hp: {best_hp}") ###Output Best hp: {'C': 2, 'fit_intercept': True, 'max_iter': 1000, 'penalty': 'l2', 'solver': 'liblinear'} ###Markdown Testing the results on the test set Now that we have the best parameters according to the validation set, we can checkout the performance on the test set. The test set should be used as the final performance evaluation before deciding whether the model is sufficient or not. ###Code # Creating the final logistic regression classifier clf_lr = LogisticRegression(**best_hp) clf_lr.fit(train_X, train_Y) # Getting the AUC statistics for train, validation and test sets yhat_train = [x[1] for x in clf_lr.predict_proba(train_X)] yhat_val = [x[1] for x in clf_lr.predict_proba(val_X)] yhat_test = [x[1] for x in clf_lr.predict_proba(test_X)] train_auc = round(roc_auc_score(train_Y, yhat_train), 4) val_auc = round(roc_auc_score(val_Y, yhat_val), 4) test_auc = round(roc_auc_score(test_Y, yhat_test), 4) # Creating a dataframe for ploting auc_lr_results = pd.DataFrame({ 'auc_lr': [train_auc, val_auc, test_auc] }, index=['train', 'validation', 'test']) # Ploting the results plt.figure(figsize=(10, 5)) plt.bar( auc_lr_results.index, auc_lr_results['auc_lr'], edgecolor='black' ) plt.title("AUC statistics") plt.ylabel("AUC value") plt.xlabel("Data type") plt.show() print(auc_lr_results) # Calculating the false positive rate and true positive rate for all the tresholds fpr_logistic, tpr_logistic, _ = roc_curve(test_Y, yhat_test) # Plot the roc curve for the model plt.figure(figsize=(9, 7)) plt.plot(fpr_logistic, tpr_logistic, marker='.', label='Logistic') plt.plot([0, 1], [0, 1], color = 'black', linewidth = 1, linestyle='--', label='Random guessing') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Logistic regression coefficients ###Code # Creating the coefficient frame coef_frame = pd.DataFrame({ 'feature': clf_lr.feature_names_in_.tolist() + ['intercept'], 'coefficient': clf_lr.coef_[0].tolist() + [clf_lr.intercept_[0]] }) # Sorting by coefficient value coef_frame.sort_values('coefficient', inplace=True) # Printing the coef frame print(coef_frame) 
plt.figure(figsize=(10, 7)) plt.barh(coef_frame['feature'], coef_frame['coefficient'], edgecolor='black') plt.xlabel('Coefficient value') plt.ylabel('Feature') plt.title("Logistic regression coefficients") plt.show() ###Output _____no_output_____ ###Markdown As we can see, all the coefficient values have the same logic as in the EDA phase. This proves that the feature engineering we have done and the model fitting phase was correct. Xgboost model Xgboost stands for extreme gradient boosting. It is not as interpratable as logistic regression but is much more flexible and in practise has proven time and time again that it gives better results than logistic regression. Hyperparameter tuning for xgboost ###Code # Defining a list of hyperparameters hp_dict = { 'n_estimators': [60, 90, 120], 'max_depth': [4, 6, 8], 'eval_metric': ['logloss'], 'use_label_encoder': [False] } # Creating the hp grid hp_grid = ParameterGrid(hp_dict) # Placeholders for the iteration auc_val_best = 0 best_hp = {} results = pd.DataFrame({}) # Iterating through all the parameters and evaluating the results for i, hp in enumerate(hp_grid): # Initiating the empty model/classifier clf = xgb.XGBClassifier(**hp) # Fitting on data clf.fit(train_X, train_Y) # Predicting on the validation set yhat_val = [x[1] for x in clf.predict_proba(val_X)] # Calculating the AUC metric auc_val = roc_auc_score(val_Y, yhat_val) # Adding to the results frame hp_results = pd.DataFrame(hp, index=[i]) hp_results['auc'] = auc_val results = results.append(hp_results) # Checking if this is the highest auc if auc_val > auc_val_best: auc_val_best = auc_val best_hp = hp # Sorting by the AUC score results.sort_values('auc', ascending=False, inplace=True) # Printing out the results frame print(results) # Printing out the best hyper parameter dictionary print(f"Best hp: {best_hp}") ###Output Best hp: {'eval_metric': 'logloss', 'max_depth': 8, 'n_estimators': 120, 'use_label_encoder': False} ###Markdown XGB auc statistic on each of the data sets ###Code clf_xgb = xgb.XGBClassifier(**best_hp) clf_xgb.fit(train_X, train_Y) # Getting the AUC statistics for train, validation and test sets yhat_train = [x[1] for x in clf_xgb.predict_proba(train_X)] yhat_val = [x[1] for x in clf_xgb.predict_proba(val_X)] yhat_test = [x[1] for x in clf_xgb.predict_proba(test_X)] train_auc = round(roc_auc_score(train_Y, yhat_train), 4) val_auc = round(roc_auc_score(val_Y, yhat_val), 4) test_auc = round(roc_auc_score(test_Y, yhat_test), 4) # Creating a dataframe for ploting auc_xgb_results = pd.DataFrame({ 'auc_xgb': [train_auc, val_auc, test_auc] }, index=['train', 'validation', 'test']) # Ploting the results plt.figure(figsize=(10, 5)) plt.bar( auc_xgb_results.index, auc_xgb_results['auc_xgb'], edgecolor='black' ) plt.title("AUC statistics") plt.ylabel("AUC value") plt.xlabel("Data type") plt.show() print(auc_xgb_results) # Calculating the false positive rate and true positive rate for all the tresholds fpr_xgb, tpr_xgb, _ = roc_curve(test_Y, yhat_test) # Plot the roc curve for the model plt.figure(figsize=(9, 7)) plt.plot(fpr_logistic, tpr_logistic, marker='.', label='Logistic') plt.plot(fpr_xgb, tpr_xgb, marker='.', label='Xgboost') plt.plot([0, 1], [0, 1], color = 'black', linewidth = 1, linestyle='--', label='Random guessing') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown All the metrics, both in the validation set and test set, are better for xgboost. 
Thus, we will save the model and use it for later. ###Code # Creating a directory to save the model objects _dir_name = os.path.join("..", "ml_models") if not os.path.exists(_dir_name): os.mkdir(_dir_name) _model_path_xgb = os.path.join(_dir_name, "ml-model-xgb.pkl") _model_path_lr = os.path.join(_dir_name, "ml-model-lr.pkl") # Saving the model pickle.dump(clf_xgb, open(_model_path_xgb, "wb")) pickle.dump(clf_lr, open(_model_path_lr, "wb")) # Saving the features for future use _feature_path = os.path.join(_dir_name, "ml-features.json") with open(_feature_path, 'w') as f: json.dump(train_X.dtypes.astype(str).to_dict(), f) ###Output _____no_output_____
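###Markdown
A quick way to confirm that the saved artifacts are usable later is to reload them and score a single row; a small sketch along those lines (it reuses the paths defined above and one row of the validation set):
###Code
import json
import pickle

# Sketch: reload the persisted XGBoost model and feature dtypes, then score one row.
with open(_model_path_xgb, "rb") as f:
    reloaded_clf = pickle.load(f)
with open(_feature_path, "r") as f:
    feature_dtypes = json.load(f)

sample = val_X[list(feature_dtypes.keys())].head(1)
print(reloaded_clf.predict_proba(sample))
###Output
_____no_output_____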
clone_portal_users_groups_and_content.ipynb
###Markdown Clone Portal users, groups and contentThis sample notebook can be used for cloning a portal, from say, a staging to a production environment. It clones the users, groups and the content. It does not copy over services though, and works at the tier of portal items.**Note**: To user this notebook as a Python script, checkout the accompanying [SDK GitHub](https://github.com/Esri/arcgis-python-api) repository. Running this as a script from a Python IDE allows you to set breakpoints, debug and inspect the script when an exception is raised. ###Code from arcgis.gis import GIS from IPython.display import display ###Output _____no_output_____ ###Markdown Define the source and target portalsTo start with, define the source and target portals. Connect to them using accounts with administrative privileges: ###Code source = GIS("https://ec2-52-53-176-137.us-west-1.compute.amazonaws.com/portal", "James_Jones", "changeme0!", verify_cert=False) target = GIS("https://usgspod.esri.com/portal", "portaladmin", "gis12345") target_admin_username = 'portaladmin' ###Output _____no_output_____ ###Markdown UsersList the users in the source and target portals. We do not want to copy over system accounts since those would be available in the target portal as well. Hence, filter the search by negating any account that starts with 'esri_'. We also do not want to copy over the [initial administrator account](http://server.arcgis.com/en/portal/latest/administer/linux/about-the-initial-administrator-account.htm) as one would be present in the target as well. Hence, negate the account that starts with `admin` which happens to be the administrator account on source portal. ###Code #!esri_ & !admin source_users = source.users.search('!esri_ & !admin') for user in source_users: print(user.username + "\t:\t" + str(user.role)) ###Output astauffer : org_user [email protected] : org_admin cdufault : org_admin [email protected] : org_admin cloveman : org_admin daryl_smith : org_publisher dcribbs : org_publisher eguido : org_user emccartney : org_publisher gdmatthews : org_publisher geor5599 : org_admin hlestinsky : org_user James_Jones : org_admin jawamboldt : org_user [email protected] : org_user jjkosovich : org_publisher jmirmelstein : org_admin jmoore : org_admin [email protected] : org_admin jose6588 : org_admin [email protected] : org_admin jproctor : org_user jxornelas : org_user kafishburn : org_user kcraun : org_publisher kgallagher : org_user KHocutt : org_admin lhansmann : org_user lmoore : org_user lrdavis : org_user [email protected] : org_publisher mberra : org_admin mgabriel : org_user mtischler : org_publisher NMayer : org_admin [email protected] : org_user [email protected] : org_admin rdollison : org_publisher rpostolovski : org_user [email protected] : org_publisher sbankston : org_user sboyer : org_user [email protected] : org_publisher Stephen.Zahniser : org_user sudhirshrestha : org_user swebinger : org_user syeleswarapu : org_user tlauver : org_publisher wmarken : org_user ###Markdown Get the number of users to migrate: ###Code len(source_users) ###Output _____no_output_____ ###Markdown Get the list of users already present in the target portal. Similar to earlier, filter out system and initial administrator accounts. The name of the admin account on target portal is `admin` as well in this example. 
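Before moving on to the target portal, a compact tally of the source accounts by role (a small sketch reusing the `source_users` list from above) gives a sense of the migration scope:
###Code
import collections

# Sketch: count how many source accounts hold each role.
role_counts = collections.Counter(user.role for user in source_users)
print(role_counts)
###Output
_____no_output_____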
###Code # filter out system and initial administrator accounts target_users = target.users.search('!esri_ & !admin & !system_publisher') target_users ###Output _____no_output_____ ###Markdown If users found on source portal were already in the target portal, run the following code to delete them. You can choose to not delete them as well. Remove existing users from target portalIf you want to clean up the target portal except for the initial administrator account, run the cell below. As you delete, you may opt to assign their content to the initial administrator account. ###Code for source_user in source_users: try: target_user = target.users.get(source_user.username) if target_user is not None: print('Deleting user: ' + target_user.fullName) target_user.reassign_to(target_admin_username) target_user.delete() except: print('User {} does not exist in Target Portal'.format(source_user.username)) ###Output _____no_output_____ ###Markdown Copy UsersCreate a function that will accept connection to the target portal, `User` objects from source portal and password to create users with. In addition to creating the users, this function will set their access, description, tags and other similar properties from source. If a user by the same name already exists in the target portal (possible if you opted not to clean out the target portal) then this function prints out an error message. ###Code def copy_user(target_portal, source_user, password): # See if the user has firstName and lastName properties try: first_name = source_user.firstName last_name = source_user.lastName except: # if not, split the fullName full_name = source_user.fullName first_name = full_name.split()[0] try: last_name = full_name.split()[1] except: last_name = 'NoLastName' try: # create user target_user = target_portal.users.create(source_user.username, password, first_name, last_name, source_user.email, source_user.description, source_user.role) # update user properties target_user.update(source_user.access, source_user.preferredView, source_user.description, source_user.tags, source_user.get_thumbnail_link(), culture=source_user.culture, region=source_user.region) return target_user except Exception as Ex: print(str(Ex)) print("Unable to create user "+ source_user.username) return None ###Output _____no_output_____ ###Markdown For each user in source portal, make a corresponding user in target portal. In this sample, we provide a common password to all users `TestPassword@123` as we are creating users off the built-in identity store. If you are creating users off your enterprise identity store, you can ignore the `password` parameter and use the `provider` and `idp_username` parameters as explained in the [API reference doc](http://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.htmlarcgis.gis.UserManager.create). 
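As a rough illustration only, a variant of the function for portals backed by an enterprise identity store might look like the sketch below; the `provider` and `idp_username` arguments are the ones mentioned above, while the function name and the way `idp_username` is supplied are assumptions that depend on your identity provider:
###Code
def copy_user_enterprise(target_portal, source_user, idp_username):
    # Sketch: create the account against an enterprise identity provider instead of
    # the built-in store; no password is set because authentication is delegated.
    full_name = source_user.fullName.split()
    return target_portal.users.create(username=source_user.username,
                                      password=None,
                                      firstname=full_name[0],
                                      lastname=full_name[-1] if len(full_name) > 1 else 'NoLastName',
                                      email=source_user.email,
                                      role=source_user.role,
                                      provider='enterprise',
                                      idp_username=idp_username)
###Output
_____no_output_____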
###Code for user in source_users: print("Creating user: " + user.username) copy_user(target, user, 'TestPassword@123') ###Output Creating user: astauffer Creating user: [email protected] Creating user: cdufault Creating user: [email protected] Creating user: cloveman Creating user: daryl_smith Creating user: dcribbs Creating user: eguido Creating user: emccartney Creating user: gdmatthews Creating user: geor5599 Creating user: hlestinsky Creating user: James_Jones Creating user: jawamboldt Creating user: [email protected] Creating user: jjkosovich Creating user: jmirmelstein Creating user: jmoore Creating user: [email protected] Creating user: jose6588 Creating user: [email protected] Creating user: jproctor Creating user: jxornelas Creating user: kafishburn Creating user: kcraun Creating user: kgallagher Creating user: KHocutt Creating user: lhansmann Creating user: lmoore Creating user: lrdavis Creating user: [email protected] Creating user: mberra Creating user: mgabriel Creating user: mtischler Creating user: NMayer Creating user: [email protected] Creating user: [email protected] Creating user: rdollison Creating user: rpostolovski Creating user: [email protected] Creating user: sbankston Creating user: sboyer Creating user: [email protected] Creating user: Stephen.Zahniser Creating user: sudhirshrestha Creating user: swebinger Creating user: syeleswarapu Creating user: tlauver Creating user: wmarken ###Markdown Verify that users have been added to target portal: ###Code target_users = target.users.search() target_users ###Output _____no_output_____ ###Markdown Thus, users have been successfully added to the target portal Groups List the groups in the source and target portals. Similar to how we searched for users, we will ignore the system created and default groups as they would be available on the target portal as well. ###Code # filter out system created groups source_groups = source.groups.search("!owner:esri_* & !Basemaps") source_groups target_groups = target.groups.search("!owner:esri_* & !Basemaps") target_groups ###Output _____no_output_____ ###Markdown If any of the groups from source are already in the target, run the following code to delete them. If the group belongs to any of default user accounts, don't delete it. This step is optional, you may choose to not delete those groups if you prefer to retain them as is. ###Code for tg in target_groups: for sg in source_groups: if sg.title == tg.title and (not tg.owner.startswith('esri_')): print("Cleaning up group {} in target Portal...".format(tg.title)) tg.delete() break ###Output Cleaning up group Featured Maps and Apps in target Portal... ###Markdown Copy GroupsLet us create a function that will clone the groups one at a time. As you call this function in a loop for each group, it reads the source group's properties, downloads thumbnail into a temporary file then creates a similar named group on target and applies those properties and thumbnail. If one of your portals is an organization on ArcGIS Online and other is an ArcGIS Enterprise, certain privacy properties need to be adapted. This function takes care of that. After creating the group, it finds which users were members of it and adds them appropriately. 
###Code import tempfile GROUP_COPY_PROPERTIES = ['title', 'description', 'tags', 'snippet', 'phone', 'access', 'isInvitationOnly'] def copy_group(target, source, source_group): with tempfile.TemporaryDirectory() as temp_dir: try: target_group = {} for property_name in GROUP_COPY_PROPERTIES: target_group[property_name] = source_group[property_name] if source_group['access'] == 'org' and target.properties['portalMode'] == 'singletenant': #cloning from ArcGIS Online to ArcGIS Enterprise target_group['access'] = 'public' elif source_group['access'] == 'public'\ and source.properties['portalMode'] == 'singletenant'\ and target.properties['portalMode'] == 'multitenant'\ and 'id' in target.properties: #cloning from ArcGIS Enterprise to ArcGIS Online org target_group['access'] = 'org' # Download the thumbnail (if one exists) thumbnail_file = None if 'thumbnail' in group: target_group['thumbnail'] = group.download_thumbnail(temp_dir) # Create the group in the target portal copied_group = target.groups.create_from_dict(target_group) # Reassign all groups to correct owners, add users, and find shared items members = group.get_members() if not members['owner'] == target_admin_username: copied_group.reassign_to(target_admin_username) return copied_group except: print("Error creating " + source_group['title']) ###Output _____no_output_____ ###Markdown For each group in source portal, make a corresponding group in target portal. ###Code from IPython.display import display for group in source_groups: target_group = copy_group(target, source, group) if target_group: display(target_group) ###Output Unable to reassign group. You already have a group named 'Data'. Try a different name. ###Markdown As you can see, we were able to add the groups with their thumbnails. Now let us verify that groups can be listed on the target portal: ###Code target_groups = target.groups.search() target_groups ###Output _____no_output_____ ###Markdown With this part of the sample, we have successfully created users, groups and added the appropriate users to these groups. Thus, you can call the `get_members()` method one of the groups to view its members: ###Code group1 = target_groups[0] group1.get_members() ###Output _____no_output_____ ###Markdown Items Copying items consists of multiple steps as explained in the following section of the sample: 1. [For each user create a mapping of itemId to the `Item`](For-each-user-create-a-mapping-of-itemId-to-the-Item) 2. [Prepare sharing information for each item](Prepare-sharing-information-for-each-item) 1. [Print a mapping of item and its group membership](Print-a-mapping-of-item-and-its-group-membership) 3. [Copy items one by one](Copy-Items) 4. 
[Establish relationship between items](establish-relationship-between-items) For each user create a mapping of itemId to the `Item`Do this for every folder in the user's account on the source portal ###Code source_items_by_id = {} for user in source_users: num_items = 0 num_folders = 0 print("Collecting item ids for {}".format(user.username), end="\t\t") user_content = user.items() # Get item ids from root folder first for item in user_content: num_items += 1 source_items_by_id[item.itemid] = item # Get item ids from each of the folders next folders = user.folders for folder in folders: num_folders += 1 folder_items = user.items(folder=folder['title']) for item in folder_items: num_items += 1 source_items_by_id[item.itemid] = item print("Number of folders {} # Number of items {}".format(str(num_folders), str(num_items))) ###Output _____no_output_____ ###Markdown Let us print the dictionary of `{item_id : Item object}` ###Code source_items_by_id ###Output _____no_output_____ ###Markdown Prepare sharing information for each itemUsing the dictionary we created above, find to which groups are each of the items shared to. ###Code for group in source_groups: #iterate through each item shared to the source group for group_item in group.content(): try: #get the item item = source_items_by_id[group_item.itemid] if item is not None: if not 'groups'in item: item['groups'] = [] #assign the target portal's corresponding group's name item['groups'].append(group['title']) except: print("Cannot find item : " + group_item.itemid) ###Output _____no_output_____ ###Markdown Print a mapping of item and its group membership ###Code for key in source_items_by_id.keys(): item = source_items_by_id[key] print("\n{:40s}".format(item.title), end = " # ") if 'groups' in item: print(item.access, end = " # ") print(item.groups, end = "") ###Output _____no_output_____ ###Markdown As we can see from above, some items are shared to a few groups while some are not. Copy ItemsBelow we define a function that you can call in a loop for each item in the dictionary we composed earlier. If the item is a text based item such as a Web Map or a file based item such as a layer package, it downloads the item's data to a temporary directory and uses that for creating the target item during cloning. You can find the [exhaustive list of different items](http://doc.arcgis.com/en/arcgis-online/reference/supported-items.htm) that you can upload to your portal and their corresponding item types from the [REST API documentation](http://resources.arcgis.com/en/help/arcgis-rest-api/index.html/Items_and_item_types/02r3000000ms000000/). For brevity, this sample covers only a subset of those items. Note, if the item points to a web layer URL, the target item would also point to the same URL. 
###Code TEXT_BASED_ITEM_TYPES = frozenset(['Web Map', 'Feature Service', 'Map Service','Web Scene', 'Image Service', 'Feature Collection', 'Feature Collection Template', 'Web Mapping Application', 'Mobile Application', 'Symbol Set', 'Color Set', 'Windows Viewer Configuration']) FILE_BASED_ITEM_TYPES = frozenset(['File Geodatabase','CSV', 'Image', 'KML', 'Locator Package', 'Map Document', 'Shapefile', 'Microsoft Word', 'PDF', 'Microsoft Powerpoint', 'Microsoft Excel', 'Layer Package', 'Mobile Map Package', 'Geoprocessing Package', 'Scene Package', 'Tile Package', 'Vector Tile Package']) ITEM_COPY_PROPERTIES = ['title', 'type', 'typeKeywords', 'description', 'tags', 'snippet', 'extent', 'spatialReference', 'name', 'accessInformation', 'licenseInfo', 'culture', 'url'] ###Output _____no_output_____ ###Markdown We define the copy function for items below. This function gets the properties of the item from source and applies it to the target. If the items were saved inside a folder, it creates that folder on the target as well. Finally, it sets the privacy (sharing) properties similar to how it was on the source portal. ###Code def copy_item(target, source_item): try: if True: temp_dir = r'/Users/jame9353/Documents/temp_data/portal_copy' item_properties = {} for property_name in ITEM_COPY_PROPERTIES: item_properties[property_name] = source_item[property_name] data_file = None if source_item.type in TEXT_BASED_ITEM_TYPES: # If its a text-based item, then read the text and add it to the request. text = source_item.get_data(False) item_properties['text'] = text elif source_item.type in FILE_BASED_ITEM_TYPES: print(source_item.type) # download data and add to the request as a file #data_file = source_item.download(temp_dir) #thumbnail_file = source_item.download_thumbnail(temp_dir) #metadata_file = source_item.download_metadata(temp_dir) """ #find item's owner source_item_owner = source.users.search(source_item.owner)[0] #find item's folder item_folder_titles = [f['title'] for f in source_item_owner.folders if f['id'] == source_item.ownerFolder] folder_name = None if len(item_folder_titles) > 0: folder_name = item_folder_titles[0] #if folder does not exist for target user, create it if folder_name: target_user = target.users.search(source_item.owner)[0] target_user_folders = [f['title'] for f in target_user.folders if f['title'] == folder_name] if len(target_user_folders) == 0: #create the folder target.content.create_folder(folder_name, source_item.owner) # Add the item to the target portal, assign owner and folder target_item = target.content.add(item_properties, data_file, thumbnail_file, metadata_file, source_item.owner, folder_name) #Set sharing (privacy) information share_everyone = source_item.access == 'public' share_org = source_item.access in ['org', 'public'] share_groups = [] if source_item.access == 'shared': share_groups = source_item.groups target_item.share(share_everyone, share_org, share_groups) return target_item """ except Exception as copy_ex: print("\tError copying " + source_item.title) print("\t" + str(copy_ex)) return None ###Output _____no_output_____ ###Markdown Copy over each item. 
While doing so, construct a dictionary mapping of source item's ID with target item's ID ###Code source_target_itemId_map = {} for key in source_items_by_id.keys(): source_item = source_items_by_id[key] print(source_item.type) #print("Copying {} \tfor\t {}".format(source_item.title, source_item.owner)) target_item = copy_item(target, source_item) if target_item: source_target_itemId_map[key] = target_item.itemid else: source_target_itemId_map[key] = None ###Output _____no_output_____ ###Markdown We have successfully cloned all the items from source to target. We can query the contents of one of the users below to verify: ###Code user1 = target.users.search()[10] user1 user1.items() ###Output _____no_output_____ ###Markdown We could query the folders belonging to this user and the items within as well ###Code user1.folders user1.items(folder=user1.folders[0]['title']) ###Output _____no_output_____ ###Markdown Establish relationship between itemsSo far, we have successfully cloned users, groups and items from source to target. Next, we will establish identical [relationships](http://resources.arcgis.com/en/help/arcgis-rest-api/index.html/Relationship_types/02r3000000mm000000/) between items as they were in the source portal. ###Code RELATIONSHIP_TYPES = frozenset(['Map2Service', 'WMA2Code', 'Map2FeatureCollection', 'MobileApp2Code', 'Service2Data', 'Service2Service']) ###Output _____no_output_____ ###Markdown Below, we loop through each item in source portal, find to which other item it is related and the type of that relationship. If a relationship is found, we find the corresponding items in target and establish the same relationship. To make this work, we will make use of the dictionary that maps the itemIds on source and target we created during the item clone stage. Let us take a look at that dictionary below: ###Code source_target_itemId_map for key in source_target_itemId_map.keys(): source_item = source_items_by_id[key] target_itemid = source_target_itemId_map[key] target_item = target.content.get(target_itemid) print(source_item.title + " # " + source_item.type) for relationship in RELATIONSHIP_TYPES: try: source_related_items = source_item.related_items(relationship) for source_related_item in source_related_items: print("\t\t" + source_related_item.title + " # " + source_related_item.type +"\t## " + relationship) #establish same relationship amongst target items print("\t\t" + "establishing relationship in target portal", end=" ") target_related_itemid = source_target_itemId_map[source_related_item.itemid] target_related_item = target.content.get(target_related_itemid) status = target_item.add_relationship(target_related_item, relationship) print(str(status)) except Exception as rel_ex: print("\t\t Error when checking for " + relationship + " : " + str(rel_ex)) continue ###Output _____no_output_____
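###Markdown
As a final sanity check, the per-user item counts on the two portals can be compared; a short sketch (root folders only, skipping system accounts) along those lines:
###Code
# Sketch: compare the number of root-folder items per user on source and target.
for user in source.users.search('!esri_ & !admin'):
    target_user = target.users.get(user.username)
    if target_user is not None:
        print(user.username, len(user.items()), len(target_user.items()))
###Output
_____no_output_____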
2_Dataset_Specdiff_Extraction.ipynb
###Markdown Loading Images in Dataset ###Code import os path_live = "/content/drive/MyDrive/MBKM RISET/RISET_1/SpecDiff_in_house_database_sample/data/live" path_spoof = "/content/drive/MyDrive/MBKM RISET/RISET_1/SpecDiff_in_house_database_sample/data/spoof" def load_images(path): image_path_flash = [] image_path_background = [] for file_id in os.listdir(path): id = os.path.join(path,file_id) counter = True for filename in os.listdir(id): if counter == True: image_path_background.append(os.path.join(id, filename)) counter = False elif counter == False: image_path_flash.append(os.path.join(id, filename)) counter = True return image_path_flash,image_path_background live_flash,live_bg = load_images(path_live) spoof_flash,spoof_bg = load_images(path_spoof) sum_live = len(live_flash)+len(live_bg) sum_spoof = len(spoof_flash)+len(spoof_bg) print("Jumlah Pasangan Gambar Live : " + str(sum_live/2)) print("Jumlah Pasangan Gambar Spoof : " + str(sum_spoof/2)) print(live_flash[0:10]) print(live_bg[0:10]) print(spoof_flash[0:10]) print(spoof_bg[0:10]) ###Output _____no_output_____ ###Markdown Library ###Code !wget --no-check-certificate \ https://raw.githubusercontent.com/italojs/facial-landmarks-recognition/master/shape_predictor_68_face_landmarks.dat \ -O shape_predictor_68_face_landmarks.dat from imutils import face_utils import imutils import numpy as np import collections import dlib import cv2 %matplotlib inline from matplotlib import pyplot as plt import pylab pylab.rcParams['figure.figsize'] = (10.0, 8.0) detector = dlib.get_frontal_face_detector() predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat') ###Output --2021-10-29 06:55:56-- https://raw.githubusercontent.com/italojs/facial-landmarks-recognition/master/shape_predictor_68_face_landmarks.dat Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ... Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected. HTTP request sent, awaiting response... 
200 OK Length: 99693937 (95M) [application/octet-stream] Saving to: ‘shape_predictor_68_face_landmarks.dat’ shape_predictor_68_ 100%[===================>] 95.08M 60.8MB/s in 1.6s 2021-10-29 06:56:01 (60.8 MB/s) - ‘shape_predictor_68_face_landmarks.dat’ saved [99693937/99693937] ###Markdown Function ###Code base_path = "/content/drive/MyDrive/MBKM RISET/" def cropping(image): try: rect = detector(image)[0] except (ValueError,IndexError): print("Not Found Face!!") return image sp = predictor(image, rect) landmarks = np.array([[p.x, p.y] for p in sp.parts()]) x = [] y_alis = [] y = [] w = [] h = [] x.append(landmarks[1][0]) y_alis.append(landmarks[17][1]) y_alis.append(landmarks[18][1]) y_alis.append(landmarks[23][1]) y_alis.append(landmarks[24][1]) w.append(landmarks[15][0]) h.append(landmarks[8][1]) y.append(min(y_alis)) crop_img = image[y[0]:h[0], x[0]:w[0]] return crop_img def preprocessing (path,sig,size_x,size_y): #face_pre = [] #face = [] global face_pre,crop_img base_image = cv2.imread(path) if(base_image is not None): grey = cv2.cvtColor(base_image, cv2.COLOR_BGR2GRAY) #for (x, y, w, h) in faces: #crop_img = grey[y:y + h, x:x + w] #crop_img = grey[faces[0,1]:faces[0,1]+faces[0,3], faces[0,0]:faces[0,0]+faces[0,3]] crop_img = cropping(grey) face_pre = cv2.GaussianBlur(crop_img, ksize=(0, 0), sigmaX=sig, borderType=cv2.BORDER_REPLICATE) face = np.double(cv2.resize(np.array(face_pre),(size_x,size_y))) return face def feature(flash,background): a = flash - background b = flash + background c = a/b trans = np.transpose(c) feat_vec = np.reshape(trans, (1, trans.size)) feat_vec = np.nan_to_num(feat_vec) return feat_vec def diffuse_extract(output,path_flash,path_bg): output_folder = output if not os.path.exists(output_folder): os.makedirs(output_folder) for i in range(len(path_flash)): #range(20): #range(len(path_flash)): flash = preprocessing(path_flash[i],5,100,100) background = preprocessing(path_bg[i],5,100,100) pantulan = feature(flash,background) name = os.path.join(base_path,output_folder,str(i)+'.JPG') print(name) plt.imsave(name, pantulan, cmap='gray') def save_images(path_flash,path_bg,file_name): feature_label = [] for i in range(len(path_flash)): flash = preprocessing(path_flash[i],5,100,100) background = preprocessing(path_bg[i],5,100,100) pantulan = feature(flash,background) print(path_flash[i]) label = 0 feature_label.append(np.append(pantulan,np.array(label))) np.save(file_name,np.array(feature_label)) ###Output _____no_output_____ ###Markdown Build .npy dataset file ###Code save_images(live_flash,live_bg,"live.npy") save_images(spoof_flash,spoof_bg,"spoof.npy") ###Output _____no_output_____
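###Markdown
To consume the saved arrays later, they can be reloaded and combined into a feature matrix and a label vector. The sketch below assumes the last column written by `save_images` is the label placeholder and, since that placeholder is 0 in both files, assigns labels by file instead (live = 1, spoof = 0 is an assumption):
###Code
import numpy as np

# Sketch: rebuild X and y from the saved .npy files.
live = np.load("live.npy")
spoof = np.load("spoof.npy")

X = np.vstack([live[:, :-1], spoof[:, :-1]])   # drop the stored label column
y = np.concatenate([np.ones(len(live)), np.zeros(len(spoof))])

print(X.shape, y.shape)
###Output
_____no_output_____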
EHR_Claims/Lasso/.ipynb_checkpoints/EHR_C_Death_No-checkpoint.ipynb
###Markdown Template LR ###Code def lr(X_train, y_train): from sklearn.linear_model import Lasso from sklearn.decomposition import PCA from sklearn.linear_model import LogisticRegression from sklearn.model_selection import GridSearchCV from imblearn.over_sampling import SMOTE from sklearn.preprocessing import StandardScaler model = LogisticRegression(penalty = 'l1', solver = 'liblinear') param_grid = [ {'C' : np.logspace(-4, 4, 20)} ] clf = GridSearchCV(model, param_grid, cv = 5, verbose = True, n_jobs = -1) best_clf = clf.fit(X_train, y_train) return best_clf def train_scores(X_train,y_train): from sklearn.metrics import accuracy_score from sklearn.metrics import f1_score from sklearn.metrics import fbeta_score from sklearn.metrics import roc_auc_score from sklearn.metrics import log_loss pred = best_clf.predict(X_train) actual = y_train print(accuracy_score(actual,pred)) print(f1_score(actual,pred)) print(fbeta_score(actual,pred, average = 'macro', beta = 2)) print(roc_auc_score(actual, best_clf.decision_function(X_train))) print(log_loss(actual,pred)) def test_scores(X_test,y_test): from sklearn.metrics import accuracy_score from sklearn.metrics import f1_score from sklearn.metrics import fbeta_score from sklearn.metrics import roc_auc_score from sklearn.metrics import log_loss pred = best_clf.predict(X_test) actual = y_test print(accuracy_score(actual,pred)) print(f1_score(actual,pred)) print(fbeta_score(actual,pred, average = 'macro', beta = 2)) print(roc_auc_score(actual, best_clf.decision_function(X_test))) print(log_loss(actual,pred)) ###Output _____no_output_____ ###Markdown General Population ###Code best_clf = lr(co_train_gpop, out_train_death_gpop) train_scores(co_train_gpop, out_train_death_gpop) print() test_scores(co_validation_gpop, out_validation_death_gpop) comb = [] for i in range(len(predictor_variable_claims)): comb.append(predictor_variable_claims[i] + str(best_clf.best_estimator_.coef_[:,i:i+1])) comb ###Output Fitting 5 folds for each of 20 candidates, totalling 100 fits ###Markdown High Continuity ###Code best_clf = lr(co_train_high, out_train_death_high) train_scores(co_train_high, out_train_death_high) print() test_scores(co_validation_high, out_validation_death_high) comb = [] for i in range(len(predictor_variable_claims)): comb.append(predictor_variable_claims[i] + str(best_clf.best_estimator_.coef_[:,i:i+1])) comb ###Output Fitting 5 folds for each of 20 candidates, totalling 100 fits 0.933130517776187 0.17995610826627653 0.5548729795317935 0.8574451785672651 2.309594386928156 0.928795155564836 0.17233009708737862 0.5524366848564672 0.8384142108895252 2.4593354631766045 ###Markdown Low Continuity ###Code best_clf = lr(co_train_low, out_train_death_low) train_scores(co_train_low, out_train_death_low) print() test_scores(co_validation_low, out_validation_death_low) comb = [] for i in range(len(predictor_variable_claims)): comb.append(predictor_variable_claims[i] + str(best_clf.best_estimator_.coef_[:,i:i+1])) comb ###Output Fitting 5 folds for each of 20 candidates, totalling 100 fits 0.8872355567056996 0.30307876849260296 0.5989330529108621 0.8419743863941891 3.894759342153146 0.8706814277533881 0.28192898781134074 0.5872545338400399 0.8145427286004835 4.466521428684785
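###Markdown
Because the penalty is L1, many coefficients are shrunk exactly to zero; a short sketch (reusing `best_clf` and `predictor_variable_claims` from the most recently fitted model) that keeps only the retained predictors, largest magnitude first:
###Code
import pandas as pd

# Sketch: list predictors whose Lasso coefficient was not shrunk to zero.
coefs = best_clf.best_estimator_.coef_.ravel()
selected = pd.DataFrame(list(zip(predictor_variable_claims, coefs)),
                        columns=["predictor", "coefficient"])
selected = selected[selected["coefficient"] != 0]
selected = selected.reindex(selected["coefficient"].abs().sort_values(ascending=False).index)
print(selected.head(20))
###Output
_____no_output_____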
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Updating a Model).ipynb
###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output _____no_output_____ ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output _____no_output_____ ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
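As a quick toy illustration of why this matters (the snippet below is only for intuition and sits outside the pipeline that follows), a `CountVectorizer` fitted on a couple of tiny 'training' documents will silently drop any word it never saw when it later transforms a 'test' document:
###Code
# Toy example for intuition only: the vectorizer can only count words it saw while fitting.
from sklearn.feature_extraction.text import CountVectorizer

toy_train = [["movie", "great", "great"], ["movie", "bad"]]
toy_test = [["movie", "fantastic"]]   # "fantastic" was never seen during fitting

toy_vec = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
print(toy_vec.fit_transform(toy_train).toarray())   # columns are sorted: bad, great, movie
print(toy_vec.transform(toy_test).toarray())        # the unseen word "fantastic" is ignored
###Output
_____no_output_____
###Markdown
This behaviour is worth keeping in mind for Step 5, where reviews encoded with a stale vocabulary quietly lose any words that the vocabulary does not contain.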
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays (imported directly, since sklearn.externals.joblib is deprecated in recent scikit-learn releases) def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output _____no_output_____ ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job.
When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output _____no_output_____ ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
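One plausible way to fill in the TODO cell below (a sketch, not necessarily the intended solution) is to pass the `vocabulary` dictionary saved in Step 3 directly to `CountVectorizer`, which pins the encoding to the same 5000 feature columns the model was trained on:
###Code
# Sketch for the TODO below: reuse the original vocabulary so the new reviews are
# encoded with exactly the columns the trained XGBoost model expects.
vectorizer = CountVectorizer(vocabulary=vocabulary,
                             preprocessor=lambda x: x,
                             tokenizer=lambda x: x)   # data is already tokenized
# With a fixed vocabulary there is nothing to fit, so transform() can be called directly.
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
The remaining TODO cells in this subsection then follow the same pattern used for the test set in Step 4: write `new_XV` to `new_data.csv` with `pd.DataFrame(new_XV).to_csv(...)`, upload it with `session.upload_data(...)`, and run `xgb_transformer.transform(...)` followed by `xgb_transformer.wait()`.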
###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = None # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = None ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = None ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. ###Output _____no_output_____ ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. 
This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = None ###Output _____no_output_____ ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call `next` on our generator. ###Code print(next(gn)) ###Output _____no_output_____ ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output _____no_output_____ ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output _____no_output_____ ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here?
Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send an bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. 
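Looking ahead, the TODO cells further down in this section ask for a replacement estimator. One reasonable sketch is to configure it exactly like the Step 4 model, since `container`, `role`, `session` and `prefix` are all still in scope (the name `new_xgb` comes from the TODO comments themselves):
###Code
# Sketch: a replacement estimator configured with the same settings as the original one.
new_xgb = sagemaker.estimator.Estimator(container,
                                        role,
                                        train_instance_count=1,
                                        train_instance_type='ml.m4.xlarge',
                                        output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                        sagemaker_session=session)

new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8,
                            silent=0, objective='binary:logistic',
                            early_stopping_rounds=10, num_round=500)
###Output
_____no_output_____
###Markdown
After the new csv files below have been written and uploaded with `session.upload_data(...)`, fitting mirrors Step 4 as well: wrap the train and validation locations in `sagemaker.s3_input(...)` objects, call `new_xgb.fit(...)`, and create `new_xgb_transformer = new_xgb.transformer(...)` for the batch transform job.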
###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = None new_val_location = None new_train_location = None ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = None # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = None s3_new_input_validation = None # TODO: Using the new validation and training data, 'fit' your new model. ###Output _____no_output_____ ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = None ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. ###Output _____no_output_____ ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown And see how well the model did. 
###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output _____no_output_____ ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = None ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. 
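For orientation, the low-level calls requested in the TODO cells of this step might look roughly like the sketch below; the configuration name is only illustrative, and `session.sagemaker_client` is the underlying boto3 SageMaker client that the session object wraps:
###Code
# Sketch of the low-level endpoint update requested in the TODO cells below.
from time import gmtime, strftime

new_xgb_endpoint_config_name = "sentiment-update-xgboost-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())

# An endpoint configuration describes which model to serve and on what hardware.
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
                                    EndpointConfigName=new_xgb_endpoint_config_name,
                                    ProductionVariants=[{
                                        "InstanceType": "ml.m4.xlarge",
                                        "InitialVariantWeight": 1,
                                        "InitialInstanceCount": 1,
                                        "ModelName": new_xgb_transformer.model_name,
                                        "VariantName": "AllTraffic"
                                    }])

# Pointing the existing endpoint at the new configuration swaps models without downtime.
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint,
                                         EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
Below, the same pieces are built up step by step, starting from the model name.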
###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = None # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = None ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. 
###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____
200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 24.8MB/s in 3.7s 2020-10-09 19:03:48 (21.8 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output _____no_output_____ ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer # from sklearn.externals import joblib import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model of comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code are then used the manipulate the training artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is require when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output _____no_output_____ ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMakers Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can peform inference on a large number of samples. An example of this in industry might be peforming an end of month report. This method of inference can also be useful to us as it means to can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer objects from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job. 
When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output _____no_output_____ ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = None # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = None ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = None ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. ###Output _____no_output_____ ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. 
This will also serve as a nice excuse to deploy our model so that we can mimic a real-life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = None ###Output _____no_output_____ ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the built-in `next` function on our generator. ###Code print(next(gn)) ###Output _____no_output_____ ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output _____no_output_____ ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output _____no_output_____ ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question:** What exactly is going on here?
Not only which words (if any) appear with a larger than expected frequency, but also what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open-ended question. To investigate, you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = None new_val_location = None new_train_location = None ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = None # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = None s3_new_input_validation = None # TODO: Using the new validation and training data, 'fit' your new model. ###Output _____no_output_____ ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = None ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. ###Output _____no_output_____ ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown And see how well the model did. 
###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output _____no_output_____ ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = None ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. 
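To make the TODO cells in this step more concrete, here is a rough sketch of what the low-level flow can look like. This is only an illustration, not the required solution: the configuration name below is made up, and it assumes that the `xgb_predictor` endpoint from the earlier TODO and the `new_xgb_transformer` object both exist, and that `session` is the SageMaker session created above. ###Code # Sketch only -- the TODO cells below ask you to build this yourself.
import time

# A unique name for the new endpoint configuration (illustrative naming scheme).
new_config_name = "sentiment-update-xgboost-config-" + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())

# Create an endpoint configuration that points at the newly trained model.
session.sagemaker_client.create_endpoint_config(
    EndpointConfigName=new_config_name,
    ProductionVariants=[{
        "InstanceType": "ml.m4.xlarge",
        "InitialVariantWeight": 1,
        "InitialInstanceCount": 1,
        "ModelName": new_xgb_transformer.model_name,
        "VariantName": "AllTraffic"
    }])

# Switch the already-deployed endpoint over to the new configuration (no downtime for the app using it).
session.sagemaker_client.update_endpoint(
    EndpointName=xgb_predictor.endpoint,
    EndpointConfigName=new_config_name) ###Output _____no_output_____ ###Markdown The cells that follow build this up one piece at a time, starting with the name of the newly created model.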
###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = None # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = None ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. 
###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output _____no_output_____ ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Wrote preprocessed data to cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker.
It is easiest to think of a model of comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code are then used the manipulate the training artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is require when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2020-05-05 15:20:12 Starting - Starting the training job... 2020-05-05 15:20:14 Starting - Launching requested ML instances...... 2020-05-05 15:21:20 Starting - Preparing the instances for training...... 2020-05-05 15:22:17 Downloading - Downloading input data... 2020-05-05 15:23:09 Training - Training image download completed. Training in progress..Arguments: train [2020-05-05:15:23:09:INFO] Running standalone xgboost training. [2020-05-05:15:23:09:INFO] File size need to be processed in the node: 238.47mb. 
Available memory size in the node: 8505.54mb [2020-05-05:15:23:09:INFO] Determined delimiter of CSV input is ',' [15:23:09] S3DistributionType set as FullyReplicated [15:23:11] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-05-05:15:23:11:INFO] Determined delimiter of CSV input is ',' [15:23:11] S3DistributionType set as FullyReplicated [15:23:12] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [15:23:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [0]#011train-error:0.2944#011validation-error:0.3023 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [15:23:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [1]#011train-error:0.2732#011validation-error:0.2813 [15:23:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [2]#011train-error:0.274067#011validation-error:0.2791 [15:23:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [3]#011train-error:0.2686#011validation-error:0.2734 [15:23:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [4]#011train-error:0.26#011validation-error:0.2697 [15:23:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.252867#011validation-error:0.2645 [15:23:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 0 pruned nodes, max_depth=5 [6]#011train-error:0.244533#011validation-error:0.2586 [15:23:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 0 pruned nodes, max_depth=5 [7]#011train-error:0.2432#011validation-error:0.254 [15:23:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 0 pruned nodes, max_depth=5 [8]#011train-error:0.229133#011validation-error:0.242 [15:23:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [9]#011train-error:0.219933#011validation-error:0.2364 [15:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [10]#011train-error:0.218067#011validation-error:0.2352 [15:23:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [11]#011train-error:0.215933#011validation-error:0.2311 [15:23:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [12]#011train-error:0.213267#011validation-error:0.2305 [15:23:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 10 pruned nodes, max_depth=5 [13]#011train-error:0.208067#011validation-error:0.2269 [15:23:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [14]#011train-error:0.202#011validation-error:0.2211 [15:23:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [15]#011train-error:0.199867#011validation-error:0.2188 [15:23:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [16]#011train-error:0.197333#011validation-error:0.2167 [15:23:38] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [17]#011train-error:0.194133#011validation-error:0.2156 [15:23:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [18]#011train-error:0.191667#011validation-error:0.2096 [15:23:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [19]#011train-error:0.189667#011validation-error:0.2064 [15:23:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [20]#011train-error:0.188067#011validation-error:0.2041 [15:23:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [21]#011train-error:0.183667#011validation-error:0.2013 [15:23:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [22]#011train-error:0.182#011validation-error:0.2006 [15:23:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [23]#011train-error:0.178733#011validation-error:0.2 [15:23:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [24]#011train-error:0.175867#011validation-error:0.1969 [15:23:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 18 pruned nodes, max_depth=5 [25]#011train-error:0.1734#011validation-error:0.1941 [15:23:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [26]#011train-error:0.172067#011validation-error:0.1935 [15:23:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [27]#011train-error:0.1708#011validation-error:0.1926 [15:23:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [28]#011train-error:0.167667#011validation-error:0.1903 [15:23:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [29]#011train-error:0.166#011validation-error:0.1896 [15:23:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [30]#011train-error:0.1644#011validation-error:0.188 [15:23:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [31]#011train-error:0.163867#011validation-error:0.1854 [15:23:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [32]#011train-error:0.162333#011validation-error:0.1856 [15:23:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [33]#011train-error:0.161067#011validation-error:0.1855 [15:23:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5 [34]#011train-error:0.159067#011validation-error:0.1838 [15:24:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [35]#011train-error:0.157533#011validation-error:0.184 [15:24:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5 [36]#011train-error:0.156867#011validation-error:0.1828 [15:24:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 2 pruned nodes, max_depth=5 [37]#011train-error:0.1538#011validation-error:0.1811 [15:24:05] src/tree/updater_prune.cc:74: tree 
pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [38]#011train-error:0.153267#011validation-error:0.179 [15:24:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [39]#011train-error:0.151733#011validation-error:0.1786 [15:24:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5 [40]#011train-error:0.151133#011validation-error:0.1772 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMakers Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can peform inference on a large number of samples. An example of this in industry might be peforming an end of month report. This method of inference can also be useful to us as it means to can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer objects from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. 
###Code xgb_transformer.wait() ###Output .........................Arguments: serve [2020-05-05 15:35:33 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-05-05 15:35:33 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-05-05 15:35:33 +0000] [1] [INFO] Using worker: gevent [2020-05-05 15:35:33 +0000] [40] [INFO] Booting worker with pid: 40 [2020-05-05 15:35:34 +0000] [41] [INFO] Booting worker with pid: 41 [2020-05-05 15:35:34 +0000] [42] [INFO] Booting worker with pid: 42 [2020-05-05:15:35:34:INFO] Model loaded successfully for worker : 40 [2020-05-05:15:35:34:INFO] Model loaded successfully for worker : 41 [2020-05-05 15:35:34 +0000] [43] [INFO] Booting worker with pid: 43 [2020-05-05:15:35:34:INFO] Model loaded successfully for worker : 42 [2020-05-05:15:35:34:INFO] Model loaded successfully for worker : 43 [2020-05-05:15:35:57:INFO] Sniff delimiter as ',' [2020-05-05:15:35:57:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:35:57:INFO] Sniff delimiter as ',' [2020-05-05:15:35:57:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:35:57:INFO] Sniff delimiter as ',' [2020-05-05:15:35:57:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:35:57:INFO] Sniff delimiter as ',' [2020-05-05:15:35:57:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:35:57:INFO] Sniff delimiter as ',' [2020-05-05:15:35:57:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:35:57:INFO] Sniff delimiter as ',' [2020-05-05:15:35:57:INFO] Determined delimiter of CSV input is ',' 2020-05-05T15:35:54.243:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-05-05:15:35:58:INFO] Sniff delimiter as ',' [2020-05-05:15:35:58:INFO] Sniff delimiter as ',' [2020-05-05:15:35:58:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:35:58:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:35:59:INFO] Sniff delimiter as ',' [2020-05-05:15:35:59:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:35:59:INFO] Sniff delimiter as ',' [2020-05-05:15:35:59:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:35:59:INFO] Sniff delimiter as ',' [2020-05-05:15:35:59:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:35:59:INFO] Sniff delimiter as ',' [2020-05-05:15:35:59:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:35:59:INFO] Sniff delimiter as ',' [2020-05-05:15:35:59:INFO] Sniff delimiter as ',' [2020-05-05:15:35:59:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:35:59:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:00:INFO] Sniff delimiter as ',' [2020-05-05:15:36:00:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:00:INFO] Sniff delimiter as ',' [2020-05-05:15:36:00:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:01:INFO] Sniff delimiter as ',' [2020-05-05:15:36:01:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:01:INFO] Sniff delimiter as ',' [2020-05-05:15:36:01:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:03:INFO] Sniff delimiter as ',' [2020-05-05:15:36:03:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:03:INFO] Sniff delimiter as ',' [2020-05-05:15:36:03:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:03:INFO] Sniff delimiter as ',' [2020-05-05:15:36:03:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:03:INFO] Sniff delimiter as ',' [2020-05-05:15:36:03:INFO] Sniff delimiter as ',' [2020-05-05:15:36:03:INFO] 
Determined delimiter of CSV input is ',' [2020-05-05:15:36:03:INFO] Sniff delimiter as ',' [2020-05-05:15:36:03:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:03:INFO] Sniff delimiter as ',' [2020-05-05:15:36:03:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:03:INFO] Sniff delimiter as ',' [2020-05-05:15:36:03:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:03:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:05:INFO] Sniff delimiter as ',' [2020-05-05:15:36:05:INFO] Sniff delimiter as ',' [2020-05-05:15:36:05:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:05:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:05:INFO] Sniff delimiter as ',' [2020-05-05:15:36:05:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:05:INFO] Sniff delimiter as ',' [2020-05-05:15:36:05:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:05:INFO] Sniff delimiter as ',' [2020-05-05:15:36:05:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:05:INFO] Sniff delimiter as ',' [2020-05-05:15:36:05:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:05:INFO] Sniff delimiter as ',' [2020-05-05:15:36:05:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:05:INFO] Sniff delimiter as ',' [2020-05-05:15:36:05:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:08:INFO] Sniff delimiter as ',' [2020-05-05:15:36:08:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:08:INFO] Sniff delimiter as ',' [2020-05-05:15:36:08:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:08:INFO] Sniff delimiter as ',' [2020-05-05:15:36:08:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:08:INFO] Sniff delimiter as ',' [2020-05-05:15:36:08:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:08:INFO] Sniff delimiter as ',' [2020-05-05:15:36:08:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:08:INFO] Sniff delimiter as ',' [2020-05-05:15:36:08:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:08:INFO] Sniff delimiter as ',' [2020-05-05:15:36:08:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:08:INFO] Sniff delimiter as ',' [2020-05-05:15:36:08:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:10:INFO] Sniff delimiter as ',' [2020-05-05:15:36:10:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:10:INFO] Sniff delimiter as ',' [2020-05-05:15:36:10:INFO] Sniff delimiter as ',' [2020-05-05:15:36:10:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:10:INFO] Sniff delimiter as ',' [2020-05-05:15:36:10:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:10:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:10:INFO] Sniff delimiter as ',' [2020-05-05:15:36:10:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:10:INFO] Sniff delimiter as ',' [2020-05-05:15:36:10:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:10:INFO] Sniff delimiter as ',' [2020-05-05:15:36:10:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:10:INFO] Sniff delimiter as ',' [2020-05-05:15:36:10:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:13:INFO] Sniff delimiter as ',' [2020-05-05:15:36:13:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:13:INFO] Sniff delimiter as ',' [2020-05-05:15:36:13:INFO] Determined delimiter of CSV input is ',' 
[2020-05-05:15:36:13:INFO] Sniff delimiter as ',' [2020-05-05:15:36:13:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:13:INFO] Sniff delimiter as ',' [2020-05-05:15:36:13:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:15:INFO] Sniff delimiter as ',' [2020-05-05:15:36:15:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:15:INFO] Sniff delimiter as ',' [2020-05-05:15:36:15:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:15:INFO] Sniff delimiter as ',' [2020-05-05:15:36:15:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:15:INFO] Sniff delimiter as ',' [2020-05-05:15:36:15:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:15:INFO] Sniff delimiter as ',' [2020-05-05:15:36:15:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:15:INFO] Sniff delimiter as ',' [2020-05-05:15:36:15:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:15:INFO] Sniff delimiter as ',' [2020-05-05:15:36:15:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:15:INFO] Sniff delimiter as ',' [2020-05-05:15:36:15:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:17:INFO] Sniff delimiter as ',' [2020-05-05:15:36:17:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:18:INFO] Sniff delimiter as ',' [2020-05-05:15:36:18:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:18:INFO] Sniff delimiter as ',' [2020-05-05:15:36:18:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:18:INFO] Sniff delimiter as ',' [2020-05-05:15:36:18:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:17:INFO] Sniff delimiter as ',' [2020-05-05:15:36:17:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:18:INFO] Sniff delimiter as ',' [2020-05-05:15:36:18:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:18:INFO] Sniff delimiter as ',' [2020-05-05:15:36:18:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:18:INFO] Sniff delimiter as ',' [2020-05-05:15:36:18:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:20:INFO] Sniff delimiter as ',' [2020-05-05:15:36:20:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:20:INFO] Sniff delimiter as ',' [2020-05-05:15:36:20:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:20:INFO] Sniff delimiter as ',' [2020-05-05:15:36:20:INFO] Determined delimiter of CSV input is ',' [2020-05-05:15:36:20:INFO] Sniff delimiter as ',' [2020-05-05:15:36:20:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/370.0 KiB (2.6 MiB/s) with 1 file(s) remaining Completed 370.0 KiB/370.0 KiB (3.7 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-1-947078573071/xgboost-2020-05-05-15-31-32-632/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. 
###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = None vocabulary_size = 5000 vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = None new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv path = os.path.join(data_dir, "new_data.csv") pd.DataFrame(new_XV).to_csv(path, index=False, header=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. 
###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = None new_data_location = session.upload_data(os.path.join(data_dir, "new_data.csv"), key_prefix='new_data_location') ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output .....................Arguments: serve [2020-05-05 16:31:14 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-05-05 16:31:14 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-05-05 16:31:14 +0000] [1] [INFO] Using worker: gevent [2020-05-05 16:31:14 +0000] [38] [INFO] Booting worker with pid: 38 [2020-05-05 16:31:14 +0000] [39] [INFO] Booting worker with pid: 39 [2020-05-05:16:31:14:INFO] Model loaded successfully for worker : 38 [2020-05-05 16:31:14 +0000] [40] [INFO] Booting worker with pid: 40 [2020-05-05 16:31:14 +0000] [41] [INFO] Booting worker with pid: 41 [2020-05-05:16:31:14:INFO] Model loaded successfully for worker : 39 [2020-05-05:16:31:14:INFO] Model loaded successfully for worker : 40 [2020-05-05:16:31:14:INFO] Model loaded successfully for worker : 41 [2020-05-05:16:31:59:INFO] Sniff delimiter as ',' [2020-05-05:16:31:59:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:31:59:INFO] Sniff delimiter as ',' [2020-05-05:16:31:59:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:31:59:INFO] Sniff delimiter as ',' [2020-05-05:16:31:59:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:31:59:INFO] Sniff delimiter as ',' [2020-05-05:16:31:59:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:31:59:INFO] Sniff delimiter as ',' [2020-05-05:16:31:59:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:31:59:INFO] Sniff delimiter as ',' [2020-05-05:16:31:59:INFO] Determined delimiter of CSV input is ',' 2020-05-05T16:31:56.249:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-05-05:16:32:00:INFO] Sniff delimiter as ',' [2020-05-05:16:32:00:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:01:INFO] Sniff delimiter as ',' [2020-05-05:16:32:01:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:01:INFO] Sniff delimiter as ',' [2020-05-05:16:32:01:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:00:INFO] Sniff delimiter as ',' [2020-05-05:16:32:00:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:01:INFO] Sniff delimiter as ',' [2020-05-05:16:32:01:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:01:INFO] Sniff delimiter as ',' [2020-05-05:16:32:01:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:02:INFO] Sniff delimiter as ',' [2020-05-05:16:32:02:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:03:INFO] Sniff delimiter as ',' [2020-05-05:16:32:03:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:03:INFO] Sniff delimiter as ',' [2020-05-05:16:32:03:INFO] 
Determined delimiter of CSV input is ',' [2020-05-05:16:32:02:INFO] Sniff delimiter as ',' [2020-05-05:16:32:02:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:03:INFO] Sniff delimiter as ',' [2020-05-05:16:32:03:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:03:INFO] Sniff delimiter as ',' [2020-05-05:16:32:03:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:03:INFO] Sniff delimiter as ',' [2020-05-05:16:32:03:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:03:INFO] Sniff delimiter as ',' [2020-05-05:16:32:03:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:05:INFO] Sniff delimiter as ',' [2020-05-05:16:32:05:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:05:INFO] Sniff delimiter as ',' [2020-05-05:16:32:05:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:05:INFO] Sniff delimiter as ',' [2020-05-05:16:32:05:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:05:INFO] Sniff delimiter as ',' [2020-05-05:16:32:05:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:05:INFO] Sniff delimiter as ',' [2020-05-05:16:32:05:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:05:INFO] Sniff delimiter as ',' [2020-05-05:16:32:05:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:07:INFO] Sniff delimiter as ',' [2020-05-05:16:32:07:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:07:INFO] Sniff delimiter as ',' [2020-05-05:16:32:07:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:08:INFO] Sniff delimiter as ',' [2020-05-05:16:32:08:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:08:INFO] Sniff delimiter as ',' [2020-05-05:16:32:08:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:08:INFO] Sniff delimiter as ',' [2020-05-05:16:32:08:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:08:INFO] Sniff delimiter as ',' [2020-05-05:16:32:08:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:08:INFO] Sniff delimiter as ',' [2020-05-05:16:32:08:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:08:INFO] Sniff delimiter as ',' [2020-05-05:16:32:08:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:08:INFO] Sniff delimiter as ',' [2020-05-05:16:32:08:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:08:INFO] Sniff delimiter as ',' [2020-05-05:16:32:08:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:13:INFO] Sniff delimiter as ',' [2020-05-05:16:32:13:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:13:INFO] Sniff delimiter as ',' [2020-05-05:16:32:13:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:13:INFO] Sniff delimiter as ',' [2020-05-05:16:32:13:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:13:INFO] Sniff delimiter as ',' [2020-05-05:16:32:13:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:13:INFO] Sniff delimiter as ',' [2020-05-05:16:32:13:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:13:INFO] Sniff delimiter as ',' [2020-05-05:16:32:13:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:13:INFO] Sniff delimiter as ',' [2020-05-05:16:32:13:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:13:INFO] Sniff delimiter as ',' [2020-05-05:16:32:13:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:15:INFO] Sniff delimiter as ',' [2020-05-05:16:32:15:INFO] Determined 
delimiter of CSV input is ',' [2020-05-05:16:32:15:INFO] Sniff delimiter as ',' [2020-05-05:16:32:15:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:15:INFO] Sniff delimiter as ',' [2020-05-05:16:32:15:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:16:INFO] Sniff delimiter as ',' [2020-05-05:16:32:16:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:15:INFO] Sniff delimiter as ',' [2020-05-05:16:32:15:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:16:INFO] Sniff delimiter as ',' [2020-05-05:16:32:16:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:17:INFO] Sniff delimiter as ',' [2020-05-05:16:32:17:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:18:INFO] Sniff delimiter as ',' [2020-05-05:16:32:18:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:18:INFO] Sniff delimiter as ',' [2020-05-05:16:32:18:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:17:INFO] Sniff delimiter as ',' [2020-05-05:16:32:17:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:18:INFO] Sniff delimiter as ',' [2020-05-05:16:32:18:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:18:INFO] Sniff delimiter as ',' [2020-05-05:16:32:18:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:18:INFO] Sniff delimiter as ',' [2020-05-05:16:32:18:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:18:INFO] Sniff delimiter as ',' [2020-05-05:16:32:18:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:20:INFO] Sniff delimiter as ',' [2020-05-05:16:32:20:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:20:INFO] Sniff delimiter as ',' [2020-05-05:16:32:20:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:20:INFO] Sniff delimiter as ',' [2020-05-05:16:32:20:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:20:INFO] Sniff delimiter as ',' [2020-05-05:16:32:20:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:20:INFO] Sniff delimiter as ',' [2020-05-05:16:32:20:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:20:INFO] Sniff delimiter as ',' [2020-05-05:16:32:20:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:20:INFO] Sniff delimiter as ',' [2020-05-05:16:32:20:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:20:INFO] Sniff delimiter as ',' [2020-05-05:16:32:20:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:22:INFO] Sniff delimiter as ',' [2020-05-05:16:32:22:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:22:INFO] Sniff delimiter as ',' [2020-05-05:16:32:22:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:22:INFO] Sniff delimiter as ',' [2020-05-05:16:32:22:INFO] Determined delimiter of CSV input is ',' [2020-05-05:16:32:22:INFO] Sniff delimiter as ',' [2020-05-05:16:32:22:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown Ok, let's do it directly in terminal to avoid this issue ###Code xgb_transformer.output_path data_dir ###Output _____no_output_____ ###Markdown Read in the results of the batch transform job. 
###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = None xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output WARNING:sagemaker:Using already existing model: xgboost-2020-05-05-15-20-12-183 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. 
###Code
print(next(gn))
###Output
(['releas', 'two', 'year', 'born', 'oscar', 'win', 'movi', 'lavish', 'technicolor', 'set', 'costum', 'breathtak', 'cinematographi', 'superb', 'wall', 'wall', 'gershwin', 'music', 'superior', 'choreographi', 'lighter', 'air', 'screenplay', 'great', 'perform', 'kelli', 'levant', 'foch', 'guetari', 'caron', 'hollywood', 'make', 'em', 'like', 'anymor', 'definit', 'favorit', 'movi', 'time', 'standard', 'judg', 'film', 'enjoy', 'enjoy', 'enjoy', 'banana'], 0)
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews. To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
{'21st', 'playboy', 'ghetto', 'weari', 'reincarn', 'victorian', 'spill'}
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
{'banana', 'omin', 'dubiou', 'sophi', 'orchestr', 'optimist', 'masterson'}
###Markdown
These words themselves don't tell us much, however if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.
**Question** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?
**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data.
**My exploration**
###Code
old_words = original_vocabulary - new_vocabulary
print([(word, vocabulary.get(word)) for word in old_words])
new_words = new_vocabulary - original_vocabulary
print([(word, new_vectorizer.vocabulary_.get(word)) for word in new_words])
###Output
[('banana', 424), ('omin', 3156), ('dubiou', 1426), ('sophi', 4144), ('orchestr', 3172), ('optimist', 3169), ('masterson', 2803)]
###Markdown
Note that the numbers printed above are the column indices that `CountVectorizer` assigns to each term (alphabetical order within the selected vocabulary), not frequency ranks, so by themselves they do not tell us how often these words appear. What they do tell us is that each of these words now occurs often enough in the new reviews to make it into the top `5000` terms, while it did not before. The appearance of an unusual word such as `banana` is therefore worth investigating further, for example by summing the corresponding column of `new_XV` to get its actual count in the new data.
(TODO) Build a new model
Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model.
This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send an bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. 
# One possible completion, following the upload pattern used for the original data earlier in the notebook.
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.
**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# One possible completion, mirroring the estimator used for the original model.
new_xgb = sagemaker.estimator.Estimator(container,
                                        role,
                                        train_instance_count=1,
                                        train_instance_type='ml.m4.xlarge',
                                        output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                        sagemaker_session=session)

# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
# used when training the original model.
new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8,
                            silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.
**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
# find the training and validation data.
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')

# TODO: Using the new validation and training data, 'fit' your new model.
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
_____no_output_____
###Markdown
(TODO) Check the new model
So now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.
**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.
**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.
**TODO:** Create a transformer object from the newly created XGBoost model.
###Code
# TODO: Create a transformer object from the new_xgb model
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we test our model on the new data.
**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable)
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
# 'wait' for the transform job to finish.
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
    cache_data = pickle.load(f)
    print("Read preprocessed data from cache file:", "preprocessed_data.pkl")

test_X = cache_data['words_test']
test_Y = cache_data['labels_test']

# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
_____no_output_____
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.
**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# One possible completion, mirroring how the new reviews were encoded above.
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model.
Step 6: (TODO) Updating the Model
So we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.
**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime

# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())

# TODO: Using the SageMaker Client, construct the endpoint configuration.
# One possible configuration; the variant name and instance type below are choices, not requirements.
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
    EndpointConfigName = new_xgb_endpoint_config_name,
    ProductionVariants = [{
        "InstanceType": "ml.m4.xlarge",
        "InitialVariantWeight": 1,
        "InitialInstanceCount": 1,
        "ModelName": new_xgb_transformer.model_name,
        "VariantName": "XGB-Model"
    }])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime.
Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. 
You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-02-08 17:50:08-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 10.7MB/s in 11s 2020-02-08 17:50:19 (7.23 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
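###Markdown
As a quick optional check, we can peek at the extracted files; the reading code in the next step expects the layout `../data/aclImdb/{train,test}/{pos,neg}/*.txt`.
###Code
# Optional: list a few of the extracted positive training reviews to confirm the layout.
!ls ../data/aclImdb/train/pos | head -n 5
###Output
_____no_output_____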
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
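###Markdown
As another quick optional check, we can confirm that the shuffled splits remain balanced between positive and negative reviews (labels are `1` for positive and `0` for negative, so the mean of each label list should be close to `0.5`).
###Code
# Optional sanity check: fraction of positive labels in each split.
print(sum(train_y) / len(train_y))
print(sum(test_y) / len(test_y))
###Output
_____no_output_____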
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Wrote preprocessed data to cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
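###Markdown
To make the effect of `review_to_words` concrete, here is an illustrative call on a short made-up review (the input string below is hypothetical, not taken from the dataset); the result should be lowercased, stripped of HTML tags, punctuation and stopwords, and stemmed.
###Code
# Illustration only: the review text here is invented for demonstration purposes.
review_to_words("This movie was <br /> absolutely WONDERFUL! I loved the acting and the story.")
###Output
_____no_output_____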
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model of comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code are then used the manipulate the training artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is require when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2020-02-08 18:56:00 Starting - Starting the training job... 2020-02-08 18:56:03 Starting - Launching requested ML instances... 2020-02-08 18:56:59 Starting - Preparing the instances for training...... 2020-02-08 18:58:01 Downloading - Downloading input data... 2020-02-08 18:58:22 Training - Downloading the training image..Arguments: train [2020-02-08:18:58:42:INFO] Running standalone xgboost training. [2020-02-08:18:58:42:INFO] File size need to be processed in the node: 238.47mb. 
Available memory size in the node: 8512.31mb [2020-02-08:18:58:42:INFO] Determined delimiter of CSV input is ',' [18:58:42] S3DistributionType set as FullyReplicated [18:58:44] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-02-08:18:58:44:INFO] Determined delimiter of CSV input is ',' [18:58:44] S3DistributionType set as FullyReplicated [18:58:45] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [18:58:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [0]#011train-error:0.298133#011validation-error:0.2983 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. 2020-02-08 18:58:41 Training - Training image download completed. Training in progress.[18:58:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 10 pruned nodes, max_depth=5 [1]#011train-error:0.283467#011validation-error:0.2811 [18:58:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [2]#011train-error:0.274933#011validation-error:0.2745 [18:58:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [3]#011train-error:0.265267#011validation-error:0.2674 [18:58:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 4 pruned nodes, max_depth=5 [4]#011train-error:0.2588#011validation-error:0.263 [18:58:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5 [5]#011train-error:0.2442#011validation-error:0.25 [18:58:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [6]#011train-error:0.236#011validation-error:0.2435 [18:58:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 2 pruned nodes, max_depth=5 [7]#011train-error:0.2318#011validation-error:0.2385 [18:58:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 0 pruned nodes, max_depth=5 [8]#011train-error:0.227267#011validation-error:0.2368 [18:59:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [9]#011train-error:0.2238#011validation-error:0.2336 [18:59:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [10]#011train-error:0.216#011validation-error:0.2267 [18:59:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5 [11]#011train-error:0.217667#011validation-error:0.2282 [18:59:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [12]#011train-error:0.2124#011validation-error:0.2238 [18:59:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [13]#011train-error:0.205933#011validation-error:0.2202 [18:59:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [14]#011train-error:0.2022#011validation-error:0.2149 [18:59:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [15]#011train-error:0.201933#011validation-error:0.2159 [18:59:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, 
max_depth=5 [16]#011train-error:0.197667#011validation-error:0.2134 [18:59:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [17]#011train-error:0.195333#011validation-error:0.2114 [18:59:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [18]#011train-error:0.1894#011validation-error:0.2073 [18:59:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [19]#011train-error:0.190333#011validation-error:0.2077 [18:59:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 0 pruned nodes, max_depth=5 [20]#011train-error:0.1872#011validation-error:0.2044 [18:59:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [21]#011train-error:0.184067#011validation-error:0.2022 [18:59:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5 [22]#011train-error:0.182#011validation-error:0.2004 [18:59:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 4 pruned nodes, max_depth=5 [23]#011train-error:0.178333#011validation-error:0.1971 [18:59:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [24]#011train-error:0.174667#011validation-error:0.1949 [18:59:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [25]#011train-error:0.173933#011validation-error:0.1929 [18:59:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [26]#011train-error:0.172333#011validation-error:0.192 [18:59:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [27]#011train-error:0.1704#011validation-error:0.1913 [18:59:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [28]#011train-error:0.169267#011validation-error:0.1906 [18:59:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [29]#011train-error:0.167667#011validation-error:0.1899 [18:59:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 0 pruned nodes, max_depth=5 [30]#011train-error:0.165733#011validation-error:0.1889 [18:59:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [31]#011train-error:0.164667#011validation-error:0.1872 [18:59:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [32]#011train-error:0.161733#011validation-error:0.1867 [18:59:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 16 pruned nodes, max_depth=5 [33]#011train-error:0.16#011validation-error:0.1846 [18:59:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [34]#011train-error:0.158#011validation-error:0.1848 [18:59:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [35]#011train-error:0.1578#011validation-error:0.1817 [18:59:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [36]#011train-error:0.156867#011validation-error:0.181 [18:59:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 
[37]#011train-error:0.1556#011validation-error:0.1797 [18:59:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [38]#011train-error:0.155133#011validation-error:0.1807 [18:59:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [39]#011train-error:0.152733#011validation-error:0.1806 [18:59:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [40]#011train-error:0.152333#011validation-error:0.1798 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real time. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference is also useful to us because it means we can perform inference on our entire test set. To perform a Batch Transformation we first need to create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running, but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback, we can run the `wait()` method.
###Code xgb_transformer.wait() ###Output .....................Arguments: serve [2020-02-08 19:06:00 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-02-08 19:06:00 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-02-08 19:06:00 +0000] [1] [INFO] Using worker: gevent [2020-02-08 19:06:00 +0000] [38] [INFO] Booting worker with pid: 38 [2020-02-08 19:06:00 +0000] [39] [INFO] Booting worker with pid: 39 [2020-02-08 19:06:00 +0000] [40] [INFO] Booting worker with pid: 40 [2020-02-08 19:06:00 +0000] [41] [INFO] Booting worker with pid: 41 [2020-02-08:19:06:00:INFO] Model loaded successfully for worker : 38 [2020-02-08:19:06:00:INFO] Model loaded successfully for worker : 40 [2020-02-08:19:06:00:INFO] Model loaded successfully for worker : 39 [2020-02-08:19:06:00:INFO] Model loaded successfully for worker : 41 2020-02-08T19:06:20.368:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-02-08:19:06:23:INFO] Sniff delimiter as ',' [2020-02-08:19:06:23:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:23:INFO] Sniff delimiter as ',' [2020-02-08:19:06:23:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:23:INFO] Sniff delimiter as ',' [2020-02-08:19:06:23:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:23:INFO] Sniff delimiter as ',' [2020-02-08:19:06:23:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:23:INFO] Sniff delimiter as ',' [2020-02-08:19:06:23:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:23:INFO] Sniff delimiter as ',' [2020-02-08:19:06:23:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:23:INFO] Sniff delimiter as ',' [2020-02-08:19:06:23:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:23:INFO] Sniff delimiter as ',' [2020-02-08:19:06:23:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:25:INFO] Sniff delimiter as ',' [2020-02-08:19:06:25:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:26:INFO] Sniff delimiter as ',' [2020-02-08:19:06:26:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:26:INFO] Sniff delimiter as ',' [2020-02-08:19:06:26:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:26:INFO] Sniff delimiter as ',' [2020-02-08:19:06:26:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:25:INFO] Sniff delimiter as ',' [2020-02-08:19:06:25:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:26:INFO] Sniff delimiter as ',' [2020-02-08:19:06:26:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:26:INFO] Sniff delimiter as ',' [2020-02-08:19:06:26:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:26:INFO] Sniff delimiter as ',' [2020-02-08:19:06:26:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:28:INFO] Sniff delimiter as ',' [2020-02-08:19:06:28:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:28:INFO] Sniff delimiter as ',' [2020-02-08:19:06:28:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:28:INFO] Sniff delimiter as ',' [2020-02-08:19:06:28:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:28:INFO] Sniff delimiter as ',' [2020-02-08:19:06:28:INFO] Sniff delimiter as ',' [2020-02-08:19:06:28:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:28:INFO] Sniff delimiter as ',' [2020-02-08:19:06:28:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:28:INFO] Sniff delimiter as ',' [2020-02-08:19:06:28:INFO] 
Determined delimiter of CSV input is ',' [2020-02-08:19:06:28:INFO] Sniff delimiter as ',' [2020-02-08:19:06:28:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:28:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:30:INFO] Sniff delimiter as ',' [2020-02-08:19:06:30:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:30:INFO] Sniff delimiter as ',' [2020-02-08:19:06:30:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:30:INFO] Sniff delimiter as ',' [2020-02-08:19:06:30:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:31:INFO] Sniff delimiter as ',' [2020-02-08:19:06:31:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:31:INFO] Sniff delimiter as ',' [2020-02-08:19:06:31:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:30:INFO] Sniff delimiter as ',' [2020-02-08:19:06:30:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:31:INFO] Sniff delimiter as ',' [2020-02-08:19:06:31:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:31:INFO] Sniff delimiter as ',' [2020-02-08:19:06:31:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:33:INFO] Sniff delimiter as ',' [2020-02-08:19:06:33:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:33:INFO] Sniff delimiter as ',' [2020-02-08:19:06:33:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:35:INFO] Sniff delimiter as ',' [2020-02-08:19:06:35:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:35:INFO] Sniff delimiter as ',' [2020-02-08:19:06:35:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:35:INFO] Sniff delimiter as ',' [2020-02-08:19:06:35:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:35:INFO] Sniff delimiter as ',' [2020-02-08:19:06:35:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:35:INFO] Sniff delimiter as ',' [2020-02-08:19:06:35:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:36:INFO] Sniff delimiter as ',' [2020-02-08:19:06:36:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:35:INFO] Sniff delimiter as ',' [2020-02-08:19:06:35:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:36:INFO] Sniff delimiter as ',' [2020-02-08:19:06:36:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:38:INFO] Sniff delimiter as ',' [2020-02-08:19:06:38:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:38:INFO] Sniff delimiter as ',' [2020-02-08:19:06:38:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:38:INFO] Sniff delimiter as ',' [2020-02-08:19:06:38:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:38:INFO] Sniff delimiter as ',' [2020-02-08:19:06:38:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:38:INFO] Sniff delimiter as ',' [2020-02-08:19:06:38:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:38:INFO] Sniff delimiter as ',' [2020-02-08:19:06:38:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:38:INFO] Sniff delimiter as ',' [2020-02-08:19:06:38:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:38:INFO] Sniff delimiter as ',' [2020-02-08:19:06:38:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:40:INFO] Sniff delimiter as ',' [2020-02-08:19:06:40:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:40:INFO] Sniff delimiter as ',' [2020-02-08:19:06:40:INFO] Determined delimiter of CSV input is ',' 
[2020-02-08:19:06:40:INFO] Sniff delimiter as ',' [2020-02-08:19:06:40:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:40:INFO] Sniff delimiter as ',' [2020-02-08:19:06:40:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:40:INFO] Sniff delimiter as ',' [2020-02-08:19:06:40:INFO] Sniff delimiter as ',' [2020-02-08:19:06:40:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:41:INFO] Sniff delimiter as ',' [2020-02-08:19:06:41:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:40:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:06:41:INFO] Sniff delimiter as ',' [2020-02-08:19:06:41:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-ap-northeast-2-148514131281/xgboost-2020-02-08-19-02-45-728/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. 
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ...........Arguments: serve [2020-02-08 19:24:26 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-02-08 19:24:26 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-02-08 19:24:26 +0000] [1] [INFO] Using worker: gevent [2020-02-08 19:24:26 +0000] [38] [INFO] Booting worker with pid: 38 [2020-02-08 19:24:26 +0000] [39] [INFO] Booting worker with pid: 39 [2020-02-08 19:24:26 +0000] [40] [INFO] Booting worker with pid: 40 [2020-02-08 19:24:26 +0000] [41] [INFO] Booting worker with pid: 41 [2020-02-08:19:24:26:INFO] Model loaded successfully for worker : 38 [2020-02-08:19:24:26:INFO] Model loaded successfully for worker : 39 [2020-02-08:19:24:26:INFO] Model loaded successfully for worker : 40 [2020-02-08:19:24:26:INFO] Model loaded successfully for worker : 41 [2020-02-08:19:24:54:INFO] Sniff delimiter as ',' [2020-02-08:19:24:54:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:24:55:INFO] Sniff delimiter as ',' [2020-02-08:19:24:55:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:24:55:INFO] Sniff delimiter as ',' [2020-02-08:19:24:55:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:24:55:INFO] Sniff delimiter as ',' [2020-02-08:19:24:54:INFO] Sniff delimiter as ',' [2020-02-08:19:24:54:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:24:55:INFO] Sniff delimiter as ',' [2020-02-08:19:24:55:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:24:55:INFO] Sniff delimiter as ',' [2020-02-08:19:24:55:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:24:55:INFO] Sniff delimiter as ',' [2020-02-08:19:24:55:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:24:55:INFO] Determined delimiter of CSV input is ',' 2020-02-08T19:24:52.346:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-02-08:19:24:57:INFO] Sniff delimiter as ',' [2020-02-08:19:24:57:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:24:57:INFO] Sniff delimiter as ',' [2020-02-08:19:24:57:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:24:57:INFO] Sniff delimiter as ',' [2020-02-08:19:24:57:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:24:57:INFO] Sniff delimiter as ',' [2020-02-08:19:24:57:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:24:58:INFO] Sniff delimiter as ',' [2020-02-08:19:24:58:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:24:58:INFO] Sniff delimiter as ',' [2020-02-08:19:24:58:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:24:58:INFO] Sniff delimiter as ',' [2020-02-08:19:24:58:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:24:58:INFO] Sniff delimiter as ',' [2020-02-08:19:24:58:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:24:59:INFO] Sniff delimiter as ',' [2020-02-08:19:24:59:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:00:INFO] Sniff delimiter as ',' [2020-02-08:19:25:00:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:00:INFO] Sniff delimiter as ',' [2020-02-08:19:25:00:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:00:INFO] Sniff delimiter as ',' [2020-02-08:19:25:00:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:24:59:INFO] Sniff delimiter as ',' [2020-02-08:19:24:59:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:00:INFO] Sniff delimiter as ',' 
[2020-02-08:19:25:00:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:00:INFO] Sniff delimiter as ',' [2020-02-08:19:25:00:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:00:INFO] Sniff delimiter as ',' [2020-02-08:19:25:00:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:02:INFO] Sniff delimiter as ',' [2020-02-08:19:25:02:INFO] Sniff delimiter as ',' [2020-02-08:19:25:02:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:02:INFO] Sniff delimiter as ',' [2020-02-08:19:25:02:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:02:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:02:INFO] Sniff delimiter as ',' [2020-02-08:19:25:02:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:03:INFO] Sniff delimiter as ',' [2020-02-08:19:25:03:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:03:INFO] Sniff delimiter as ',' [2020-02-08:19:25:03:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:03:INFO] Sniff delimiter as ',' [2020-02-08:19:25:03:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:03:INFO] Sniff delimiter as ',' [2020-02-08:19:25:03:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:05:INFO] Sniff delimiter as ',' [2020-02-08:19:25:05:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:05:INFO] Sniff delimiter as ',' [2020-02-08:19:25:05:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:05:INFO] Sniff delimiter as ',' [2020-02-08:19:25:05:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:05:INFO] Sniff delimiter as ',' [2020-02-08:19:25:05:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:07:INFO] Sniff delimiter as ',' [2020-02-08:19:25:07:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:07:INFO] Sniff delimiter as ',' [2020-02-08:19:25:07:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:07:INFO] Sniff delimiter as ',' [2020-02-08:19:25:07:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:07:INFO] Sniff delimiter as ',' [2020-02-08:19:25:07:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:07:INFO] Sniff delimiter as ',' [2020-02-08:19:25:07:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:08:INFO] Sniff delimiter as ',' [2020-02-08:19:25:08:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:07:INFO] Sniff delimiter as ',' [2020-02-08:19:25:07:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:08:INFO] Sniff delimiter as ',' [2020-02-08:19:25:08:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:09:INFO] Sniff delimiter as ',' [2020-02-08:19:25:09:INFO] Sniff delimiter as ',' [2020-02-08:19:25:09:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:10:INFO] Sniff delimiter as ',' [2020-02-08:19:25:10:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:10:INFO] Sniff delimiter as ',' [2020-02-08:19:25:10:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:10:INFO] Sniff delimiter as ',' [2020-02-08:19:25:10:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:09:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:10:INFO] Sniff delimiter as ',' [2020-02-08:19:25:10:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:10:INFO] Sniff delimiter as ',' [2020-02-08:19:25:10:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:10:INFO] Sniff delimiter as ',' 
[2020-02-08:19:25:10:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:12:INFO] Sniff delimiter as ',' [2020-02-08:19:25:12:INFO] Sniff delimiter as ',' [2020-02-08:19:25:12:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:12:INFO] Sniff delimiter as ',' [2020-02-08:19:25:12:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:12:INFO] Sniff delimiter as ',' [2020-02-08:19:25:12:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:12:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:12:INFO] Sniff delimiter as ',' [2020-02-08:19:25:12:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:12:INFO] Sniff delimiter as ',' [2020-02-08:19:25:12:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:13:INFO] Sniff delimiter as ',' [2020-02-08:19:25:13:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:13:INFO] Sniff delimiter as ',' [2020-02-08:19:25:13:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:14:INFO] Sniff delimiter as ',' [2020-02-08:19:25:14:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:25:14:INFO] Sniff delimiter as ',' [2020-02-08:19:25:14:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/370.0 KiB (3.5 MiB/s) with 1 file(s) remaining Completed 370.0 KiB/370.0 KiB (4.9 MiB/s) with 1 file(s) remaining download: s3://sagemaker-ap-northeast-2-148514131281/xgboost-2020-02-08-19-21-24-197/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. 
xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') ###Output WARNING:sagemaker:Using already existing model: xgboost-2020-02-08-18-56-00-821 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['othello', 'classic', 'shakespearen', 'stori', 'love', 'betray', 'lie', 'tragedi', 'rememb', 'studi', 'stori', 'high', 'school', 'actual', 'found', 'othello', 'probabl', 'favorit', 'shakespear', 'stori', 'due', 'fact', 'fascin', 'fact', 'shakespear', 'captur', 'feel', 'friendship', 'love', 'racism', 'perfectli', 'mean', 'realli', 'studi', 'stori', 'could', 'go', 'mani', 'philosophi', 'othello', 'went', 'insan', 'jealousi', 'blink', 'eye', 'later', 'report', 'also', 'watch', 'version', 'othello', 'say', 'absolut', 'brilliant', 'lawer', 'kenneth', 'captur', 'stori', 'well', 'understood', 'dark', 'othello', 'big', 'time', 'soldier', 'citi', 'love', 'everyon', 'includ', 'king', 'king', 'find', 'othello', 'snuck', 'daughter', 'desdemona', 'king', 'infuri', 'except', 'othello', 'welcom', 'citi', 'make', 'best', 'friend', 'cassio', 'side', 'man', 'instead', 'iago', 'stood', 'othello', 'due', 'insan', 'jealousi', 'reveng', 'still', 'pretend', 'othello', 'best', 'friend', 'mearli', 'hint', 'othello', 'desdemona', 'cheat', 'cassio', 'never', 'say', 'make', 'othello', 'think', 'happen', 'othello', 'driven', 'insan', 'pleasant', 'plan', 'desdemona', 'cassio', 'iago', 'happi', 'help', 'othello', 'incred', 'stori', 'highli', 'recommend', 'read', 'incred', 'stori', 'keep', 'think', 'read', 'othello', 'movi', 'also', 'great', 'recommend', 'captur', 'stori', 'perfectli', 'big', 'tearjerk', 'type', 'feel', 'could', 'utter', 'shock', 'happen', 'othello', 'desdemona', 'quickli', 'believ', 'true', 'love', 'would', 'betray', 'terrif', 'movi', 'great', 'act', 'good', 'set', 'good', 'direct', 'shakespear', 'meant', 'wrote', 'stori', '10', '10', 'banana'], 0) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. 
The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'reincarn', 'weari', 'spill', 'playboy', 'victorian', '21st', 'ghetto'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'orchestr', 'sophi', 'optimist', 'omin', 'dubiou', 'masterson', 'banana'} ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. One possible starting point, a quick word-frequency check on the new reviews, is sketched further below, just before we split up the new data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model.
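Before we do that, let's actually follow up on the question posed above about word frequencies. The cell below is only an exploratory sketch and is not part of the original set of TODOs: it assumes that `new_X`, the list of nltk-processed reviews where each review is a list of words, is still in memory, and the words it checks are simply the ones that showed up only in the new vocabulary above.

###Code
# Exploratory sketch: count how often the words that are unique to the new vocabulary
# actually occur in the new reviews. A surprisingly large count for any of these words
# would hint at what has changed in the underlying distribution.
from collections import Counter

new_word_counts = Counter(word for review in new_X for word in review)

# These words come from the set difference printed above (new vocabulary minus original).
for word in ['orchestr', 'sophi', 'optimist', 'omin', 'dubiou', 'masterson', 'banana']:
    print(word, new_word_counts[word])

###Output
_____no_output_____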
As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so, to make things simple, we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model.
###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2020-02-08 19:38:04 Starting - Starting the training job... 2020-02-08 19:38:05 Starting - Launching requested ML instances... 2020-02-08 19:39:02 Starting - Preparing the instances for training...... 2020-02-08 19:40:03 Downloading - Downloading input data 2020-02-08 19:40:03 Training - Downloading the training image... 2020-02-08 19:40:25 Training - Training image download completed. Training in progress..Arguments: train [2020-02-08:19:40:26:INFO] Running standalone xgboost training. [2020-02-08:19:40:26:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8511.34mb [2020-02-08:19:40:26:INFO] Determined delimiter of CSV input is ',' [19:40:26] S3DistributionType set as FullyReplicated [19:40:27] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-02-08:19:40:27:INFO] Determined delimiter of CSV input is ',' [19:40:27] S3DistributionType set as FullyReplicated [19:40:28] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [19:40:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 2 pruned nodes, max_depth=5 [0]#011train-error:0.306#011validation-error:0.3055 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. 
[19:40:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 8 pruned nodes, max_depth=5 [1]#011train-error:0.2944#011validation-error:0.2919 [19:40:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [2]#011train-error:0.284333#011validation-error:0.2819 [19:40:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 4 pruned nodes, max_depth=5 [3]#011train-error:0.269933#011validation-error:0.271 [19:40:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [4]#011train-error:0.259867#011validation-error:0.2631 [19:40:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 4 pruned nodes, max_depth=5 [5]#011train-error:0.260867#011validation-error:0.2669 [19:40:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [6]#011train-error:0.254467#011validation-error:0.2612 [19:40:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [7]#011train-error:0.245933#011validation-error:0.2522 [19:40:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [8]#011train-error:0.243533#011validation-error:0.2523 [19:40:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [9]#011train-error:0.232867#011validation-error:0.2437 [19:40:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [10]#011train-error:0.2252#011validation-error:0.2337 [19:40:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [11]#011train-error:0.2226#011validation-error:0.2318 [19:40:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [12]#011train-error:0.218867#011validation-error:0.2283 [19:40:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [13]#011train-error:0.214#011validation-error:0.2267 [19:40:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [14]#011train-error:0.206733#011validation-error:0.2234 [19:40:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [15]#011train-error:0.203467#011validation-error:0.2201 [19:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 2 pruned nodes, max_depth=5 [16]#011train-error:0.201933#011validation-error:0.2176 [19:40:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [17]#011train-error:0.201067#011validation-error:0.2145 [19:40:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [18]#011train-error:0.1982#011validation-error:0.213 [19:40:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [19]#011train-error:0.194667#011validation-error:0.2141 [19:40:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [20]#011train-error:0.192733#011validation-error:0.2116 [19:40:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [21]#011train-error:0.191867#011validation-error:0.2108 [19:41:00] src/tree/updater_prune.cc:74: 
tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [22]#011train-error:0.1904#011validation-error:0.2125 [19:41:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 4 pruned nodes, max_depth=5 [23]#011train-error:0.1898#011validation-error:0.208 [19:41:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [24]#011train-error:0.1868#011validation-error:0.2077 [19:41:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [25]#011train-error:0.184867#011validation-error:0.2053 [19:41:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [26]#011train-error:0.183667#011validation-error:0.2047 [19:41:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [27]#011train-error:0.182133#011validation-error:0.2024 [19:41:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5 [28]#011train-error:0.1816#011validation-error:0.2017 [19:41:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [29]#011train-error:0.178867#011validation-error:0.2004 [19:41:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [30]#011train-error:0.177867#011validation-error:0.1988 [19:41:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [31]#011train-error:0.1764#011validation-error:0.1976 [19:41:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [32]#011train-error:0.175067#011validation-error:0.1976 [19:41:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [33]#011train-error:0.1738#011validation-error:0.1975 [19:41:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [34]#011train-error:0.1714#011validation-error:0.1968 [19:41:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 16 pruned nodes, max_depth=5 [35]#011train-error:0.1708#011validation-error:0.1972 [19:41:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [36]#011train-error:0.171#011validation-error:0.1967 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. 
###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ....................Arguments: serve [2020-02-08 19:46:32 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-02-08 19:46:32 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-02-08 19:46:32 +0000] [1] [INFO] Using worker: gevent [2020-02-08 19:46:32 +0000] [38] [INFO] Booting worker with pid: 38 [2020-02-08 19:46:32 +0000] [39] [INFO] Booting worker with pid: 39 Arguments: serve [2020-02-08 19:46:32 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-02-08 19:46:32 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-02-08 19:46:32 +0000] [1] [INFO] Using worker: gevent [2020-02-08 19:46:32 +0000] [38] [INFO] Booting worker with pid: 38 [2020-02-08 19:46:32 +0000] [39] [INFO] Booting worker with pid: 39 [2020-02-08 19:46:32 +0000] [40] [INFO] Booting worker with pid: 40 [2020-02-08 19:46:32 +0000] [41] [INFO] Booting worker with pid: 41 [2020-02-08 19:46:32 +0000] [40] [INFO] Booting worker with pid: 40 [2020-02-08 19:46:32 +0000] [41] [INFO] Booting worker with pid: 41 [2020-02-08:19:46:32:INFO] Model loaded successfully for worker : 38 [2020-02-08:19:46:32:INFO] Model loaded successfully for worker : 41 [2020-02-08:19:46:32:INFO] Model loaded successfully for worker : 39 [2020-02-08:19:46:32:INFO] Model loaded successfully for worker : 40 [2020-02-08:19:46:32:INFO] Model loaded successfully for worker : 38 [2020-02-08:19:46:32:INFO] Model loaded successfully for worker : 41 [2020-02-08:19:46:32:INFO] Model loaded successfully for worker : 39 [2020-02-08:19:46:32:INFO] Model loaded successfully for worker : 40 2020-02-08T19:46:41.210:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-02-08:19:46:44:INFO] Sniff delimiter as ',' [2020-02-08:19:46:44:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:44:INFO] Sniff delimiter as ',' [2020-02-08:19:46:44:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:44:INFO] Sniff delimiter as ',' [2020-02-08:19:46:44:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:44:INFO] Sniff delimiter as ',' [2020-02-08:19:46:44:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:46:INFO] Sniff delimiter as ',' [2020-02-08:19:46:46:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:46:INFO] Sniff delimiter as ',' [2020-02-08:19:46:46:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:47:INFO] Sniff delimiter as ',' [2020-02-08:19:46:47:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:47:INFO] Sniff delimiter as ',' [2020-02-08:19:46:47:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:46:INFO] Sniff delimiter as ',' [2020-02-08:19:46:46:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:46:INFO] Sniff delimiter as ',' [2020-02-08:19:46:46:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:47:INFO] Sniff delimiter as ',' [2020-02-08:19:46:47:INFO] Determined 
delimiter of CSV input is ',' [2020-02-08:19:46:47:INFO] Sniff delimiter as ',' [2020-02-08:19:46:47:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:49:INFO] Sniff delimiter as ',' [2020-02-08:19:46:49:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:49:INFO] Sniff delimiter as ',' [2020-02-08:19:46:49:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:49:INFO] Sniff delimiter as ',' [2020-02-08:19:46:49:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:49:INFO] Sniff delimiter as ',' [2020-02-08:19:46:49:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:49:INFO] Sniff delimiter as ',' [2020-02-08:19:46:49:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:49:INFO] Sniff delimiter as ',' [2020-02-08:19:46:49:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:49:INFO] Sniff delimiter as ',' [2020-02-08:19:46:49:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:49:INFO] Sniff delimiter as ',' [2020-02-08:19:46:49:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:51:INFO] Sniff delimiter as ',' [2020-02-08:19:46:51:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:51:INFO] Sniff delimiter as ',' [2020-02-08:19:46:51:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:51:INFO] Sniff delimiter as ',' [2020-02-08:19:46:51:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:52:INFO] Sniff delimiter as ',' [2020-02-08:19:46:52:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:51:INFO] Sniff delimiter as ',' [2020-02-08:19:46:51:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:51:INFO] Sniff delimiter as ',' [2020-02-08:19:46:51:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:51:INFO] Sniff delimiter as ',' [2020-02-08:19:46:51:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:52:INFO] Sniff delimiter as ',' [2020-02-08:19:46:52:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:56:INFO] Sniff delimiter as ',' [2020-02-08:19:46:56:INFO] Sniff delimiter as ',' [2020-02-08:19:46:56:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:56:INFO] Sniff delimiter as ',' [2020-02-08:19:46:56:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:56:INFO] Sniff delimiter as ',' [2020-02-08:19:46:56:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:56:INFO] Sniff delimiter as ',' [2020-02-08:19:46:56:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:56:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:56:INFO] Sniff delimiter as ',' [2020-02-08:19:46:56:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:56:INFO] Sniff delimiter as ',' [2020-02-08:19:46:56:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:56:INFO] Sniff delimiter as ',' [2020-02-08:19:46:56:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:58:INFO] Sniff delimiter as ',' [2020-02-08:19:46:58:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:59:INFO] Sniff delimiter as ',' [2020-02-08:19:46:59:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:59:INFO] Sniff delimiter as ',' [2020-02-08:19:46:59:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:58:INFO] Sniff delimiter as ',' [2020-02-08:19:46:58:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:59:INFO] Sniff delimiter as ',' [2020-02-08:19:46:59:INFO] Determined delimiter of 
CSV input is ',' [2020-02-08:19:46:59:INFO] Sniff delimiter as ',' [2020-02-08:19:46:59:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:59:INFO] Sniff delimiter as ',' [2020-02-08:19:46:59:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:46:59:INFO] Sniff delimiter as ',' [2020-02-08:19:46:59:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:47:01:INFO] Sniff delimiter as ',' [2020-02-08:19:47:01:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:47:01:INFO] Sniff delimiter as ',' [2020-02-08:19:47:01:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:47:01:INFO] Sniff delimiter as ',' [2020-02-08:19:47:01:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:47:01:INFO] Sniff delimiter as ',' [2020-02-08:19:47:01:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:47:01:INFO] Sniff delimiter as ',' [2020-02-08:19:47:01:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:47:01:INFO] Sniff delimiter as ',' [2020-02-08:19:47:01:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:47:01:INFO] Sniff delimiter as ',' [2020-02-08:19:47:01:INFO] Determined delimiter of CSV input is ',' [2020-02-08:19:47:01:INFO] Sniff delimiter as ',' [2020-02-08:19:47:01:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/366.3 KiB (3.8 MiB/s) with 1 file(s) remaining Completed 366.3 KiB/366.3 KiB (5.4 MiB/s) with 1 file(s) remaining download: s3://sagemaker-ap-northeast-2-148514131281/xgboost-2020-02-08-19-43-17-347/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. 
###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code model_name = new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "sentiment-analysis-xgboost-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": model_name, "VariantName": "AllTraffic" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. 
This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code xgb_predictor.endpoint # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint( EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name ) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output ------------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. 
Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-04-12 21:16:58-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 16.7MB/s in 8.4s 2020-04-12 21:17:07 (9.54 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
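For reference, the extracted archive keeps one plain-text file per review, grouped by split and by sentiment (roughly 12,500 files in each labelled folder); the helper in the next cell simply globs these folders. The archive also ships an unlabelled `train/unsup` folder, which this notebook ignores.

```
../data/aclImdb/
    train/pos/*.txt
    train/neg/*.txt
    test/pos/*.txt
    test/neg/*.txt
```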
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
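Concretely, once the `review_to_words` helper defined in the next cell is available, a short review gets reduced roughly like this (an illustrative sketch, not an executed cell):

```python
# Expected behaviour of review_to_words (defined in the next cell): HTML tags stripped,
# text lower-cased, punctuation removed, stopwords dropped, remaining words stemmed.
review_to_words("This movie was <br />GREAT, I loved it!")
# -> ['movi', 'great', 'love']
```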
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
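To make the 'training set only' point concrete, here is a tiny sketch (not part of the original notebook) of how `CountVectorizer` behaves: the vocabulary is learned from the training documents, and any word that never appeared there is silently dropped when the test documents are transformed.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Toy corpus, already tokenized the same way our reviews are.
toy_train = [["movi", "great", "great"], ["movi", "bad"]]
toy_test = [["great", "popcorn"]]  # "popcorn" was never seen during fitting

toy_vec = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
print(toy_vec.fit_transform(toy_train).toarray())  # [[0 2 1], [1 0 1]] with columns bad, great, movi
print(toy_vec.transform(toy_test).toarray())       # [[0 1 0]] -- "popcorn" is ignored
```

The cell below does the same thing at full scale, with a 5000-word vocabulary and caching so the expensive step only runs once.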
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m5.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2020-04-12 21:28:17 Starting - Starting the training job... 2020-04-12 21:28:18 Starting - Launching requested ML instances...... 2020-04-12 21:29:18 Starting - Preparing the instances for training... 2020-04-12 21:30:10 Downloading - Downloading input data 2020-04-12 21:30:10 Training - Downloading the training image..Arguments: train [2020-04-12:21:30:24:INFO] Running standalone xgboost training. [2020-04-12:21:30:24:INFO] File size need to be processed in the node: 238.47mb.
Available memory size in the node: 8162.64mb [2020-04-12:21:30:24:INFO] Determined delimiter of CSV input is ',' [21:30:24] S3DistributionType set as FullyReplicated [21:30:26] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-04-12:21:30:26:INFO] Determined delimiter of CSV input is ',' [21:30:26] S3DistributionType set as FullyReplicated [21:30:27] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [21:30:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [0]#011train-error:0.298333#011validation-error:0.2952 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [21:30:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 2 pruned nodes, max_depth=5 [1]#011train-error:0.2834#011validation-error:0.2792 [21:30:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [2]#011train-error:0.282133#011validation-error:0.2793 [21:30:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 0 pruned nodes, max_depth=5 [3]#011train-error:0.267133#011validation-error:0.2688 2020-04-12 21:30:24 Training - Training image download completed. Training in progress.[21:30:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [4]#011train-error:0.269533#011validation-error:0.2702 [21:30:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [5]#011train-error:0.257933#011validation-error:0.2609 [21:30:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [6]#011train-error:0.2444#011validation-error:0.2524 [21:30:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [7]#011train-error:0.241467#011validation-error:0.246 [21:30:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [8]#011train-error:0.231133#011validation-error:0.2379 [21:30:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [9]#011train-error:0.226067#011validation-error:0.2325 [21:30:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [10]#011train-error:0.2226#011validation-error:0.2298 [21:30:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [11]#011train-error:0.2184#011validation-error:0.2251 [21:30:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [12]#011train-error:0.210933#011validation-error:0.2192 [21:30:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [13]#011train-error:0.207467#011validation-error:0.2171 [21:30:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [14]#011train-error:0.203133#011validation-error:0.2134 [21:30:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [15]#011train-error:0.199#011validation-error:0.2098 [21:30:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned 
nodes, max_depth=5 [16]#011train-error:0.197#011validation-error:0.2069 [21:30:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 12 pruned nodes, max_depth=5 [17]#011train-error:0.192467#011validation-error:0.2062 [21:30:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 14 pruned nodes, max_depth=5 [18]#011train-error:0.189467#011validation-error:0.2041 [21:30:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [19]#011train-error:0.186733#011validation-error:0.2022 [21:30:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 0 pruned nodes, max_depth=5 [20]#011train-error:0.1826#011validation-error:0.199 [21:30:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [21]#011train-error:0.182133#011validation-error:0.1971 [21:30:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [22]#011train-error:0.1786#011validation-error:0.1954 [21:31:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [23]#011train-error:0.175867#011validation-error:0.1949 [21:31:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [24]#011train-error:0.173533#011validation-error:0.1935 [21:31:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [25]#011train-error:0.172#011validation-error:0.193 [21:31:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [26]#011train-error:0.169467#011validation-error:0.1892 [21:31:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [27]#011train-error:0.166533#011validation-error:0.1882 [21:31:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [28]#011train-error:0.166133#011validation-error:0.1869 [21:31:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 14 pruned nodes, max_depth=5 [29]#011train-error:0.1638#011validation-error:0.1855 [21:31:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [30]#011train-error:0.162067#011validation-error:0.1844 [21:31:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [31]#011train-error:0.161#011validation-error:0.1835 [21:31:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [32]#011train-error:0.159267#011validation-error:0.1823 [21:31:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [33]#011train-error:0.158267#011validation-error:0.1816 [21:31:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5 [34]#011train-error:0.157267#011validation-error:0.1817 [21:31:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [35]#011train-error:0.156667#011validation-error:0.1808 [21:31:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [36]#011train-error:0.156267#011validation-error:0.1782 [21:31:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5 
[37]#011train-error:0.155533#011validation-error:0.1781 [21:31:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [38]#011train-error:0.153667#011validation-error:0.1774 [21:31:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [39]#011train-error:0.152067#011validation-error:0.1757 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real-time. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we first need to create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m5.large') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
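If the progress dots from `wait()` are not enough, the same information can be obtained by polling the low-level client directly. This is only a sketch, not something the notebook relies on; the job name below is the one that appears in this run's S3 output path and would normally be read from the transformer or the SageMaker console.

```python
import time

transform_job_name = "xgboost-2020-04-12-21-34-31-266"  # placeholder: use your actual job name

# Poll the transform job status every 30 seconds until it reaches a terminal state.
while True:
    desc = session.sagemaker_client.describe_transform_job(TransformJobName=transform_job_name)
    status = desc["TransformJobStatus"]
    print(status)
    if status in ("Completed", "Failed", "Stopped"):
        break
    time.sleep(30)
```

Either way, the cell below simply calls `wait()` and lets the SDK do the polling for us.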
###Code xgb_transformer.wait() ###Output ................Arguments: serve [2020-04-12 21:36:57 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-04-12 21:36:57 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-04-12 21:36:57 +0000] [1] [INFO] Using worker: gevent [2020-04-12 21:36:57 +0000] [38] [INFO] Booting worker with pid: 38 [2020-04-12 21:36:57 +0000] [39] [INFO] Booting worker with pid: 39 [2020-04-12:21:36:57:INFO] Model loaded successfully for worker : 38 [2020-04-12:21:36:57:INFO] Model loaded successfully for worker : 39 2020-04-12T21:37:16.909:[sagemaker logs]: MaxConcurrentTransforms=2, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-04-12:21:37:20:INFO] Sniff delimiter as ',' [2020-04-12:21:37:20:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:20:INFO] Sniff delimiter as ',' [2020-04-12:21:37:20:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:21:INFO] Sniff delimiter as ',' [2020-04-12:21:37:21:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:21:INFO] Sniff delimiter as ',' [2020-04-12:21:37:21:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:23:INFO] Sniff delimiter as ',' [2020-04-12:21:37:23:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:23:INFO] Sniff delimiter as ',' [2020-04-12:21:37:23:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:23:INFO] Sniff delimiter as ',' [2020-04-12:21:37:23:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:23:INFO] Sniff delimiter as ',' [2020-04-12:21:37:23:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:25:INFO] Sniff delimiter as ',' [2020-04-12:21:37:25:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:25:INFO] Sniff delimiter as ',' [2020-04-12:21:37:25:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:25:INFO] Sniff delimiter as ',' [2020-04-12:21:37:25:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:25:INFO] Sniff delimiter as ',' [2020-04-12:21:37:25:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:29:INFO] Sniff delimiter as ',' [2020-04-12:21:37:29:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:29:INFO] Sniff delimiter as ',' [2020-04-12:21:37:29:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:29:INFO] Sniff delimiter as ',' [2020-04-12:21:37:29:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:29:INFO] Sniff delimiter as ',' [2020-04-12:21:37:29:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:31:INFO] Sniff delimiter as ',' [2020-04-12:21:37:31:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:31:INFO] Sniff delimiter as ',' [2020-04-12:21:37:31:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:31:INFO] Sniff delimiter as ',' [2020-04-12:21:37:31:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:31:INFO] Sniff delimiter as ',' [2020-04-12:21:37:31:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:33:INFO] Sniff delimiter as ',' [2020-04-12:21:37:33:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:33:INFO] Sniff delimiter as ',' [2020-04-12:21:37:33:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:33:INFO] Sniff delimiter as ',' [2020-04-12:21:37:33:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:33:INFO] Sniff delimiter as ',' [2020-04-12:21:37:33:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:35:INFO] Sniff delimiter as ',' 
[2020-04-12:21:37:35:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:35:INFO] Sniff delimiter as ',' [2020-04-12:21:37:35:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:35:INFO] Sniff delimiter as ',' [2020-04-12:21:37:35:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:35:INFO] Sniff delimiter as ',' [2020-04-12:21:37:35:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:37:INFO] Sniff delimiter as ',' [2020-04-12:21:37:37:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:37:INFO] Sniff delimiter as ',' [2020-04-12:21:37:37:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:37:INFO] Sniff delimiter as ',' [2020-04-12:21:37:37:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:37:INFO] Sniff delimiter as ',' [2020-04-12:21:37:37:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:41:INFO] Sniff delimiter as ',' [2020-04-12:21:37:41:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:41:INFO] Sniff delimiter as ',' [2020-04-12:21:37:41:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:41:INFO] Sniff delimiter as ',' [2020-04-12:21:37:41:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:41:INFO] Sniff delimiter as ',' [2020-04-12:21:37:41:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:43:INFO] Sniff delimiter as ',' [2020-04-12:21:37:43:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:43:INFO] Sniff delimiter as ',' [2020-04-12:21:37:43:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:43:INFO] Sniff delimiter as ',' [2020-04-12:21:37:43:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:43:INFO] Sniff delimiter as ',' [2020-04-12:21:37:43:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:45:INFO] Sniff delimiter as ',' [2020-04-12:21:37:45:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:45:INFO] Sniff delimiter as ',' [2020-04-12:21:37:45:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:45:INFO] Sniff delimiter as ',' [2020-04-12:21:37:45:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:45:INFO] Sniff delimiter as ',' [2020-04-12:21:37:45:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:47:INFO] Sniff delimiter as ',' [2020-04-12:21:37:47:INFO] Sniff delimiter as ',' [2020-04-12:21:37:47:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:47:INFO] Sniff delimiter as ',' [2020-04-12:21:37:47:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:47:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:47:INFO] Sniff delimiter as ',' [2020-04-12:21:37:47:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:49:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:49:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:51:INFO] Sniff delimiter as ',' [2020-04-12:21:37:51:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:52:INFO] Sniff delimiter as ',' [2020-04-12:21:37:52:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:51:INFO] Sniff delimiter as ',' [2020-04-12:21:37:51:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:52:INFO] Sniff delimiter as ',' [2020-04-12:21:37:52:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:53:INFO] Sniff delimiter as ',' [2020-04-12:21:37:53:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:54:INFO] Sniff delimiter as 
',' [2020-04-12:21:37:54:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:53:INFO] Sniff delimiter as ',' [2020-04-12:21:37:53:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:54:INFO] Sniff delimiter as ',' [2020-04-12:21:37:54:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:55:INFO] Sniff delimiter as ',' [2020-04-12:21:37:55:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:56:INFO] Sniff delimiter as ',' [2020-04-12:21:37:56:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:55:INFO] Sniff delimiter as ',' [2020-04-12:21:37:55:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:56:INFO] Sniff delimiter as ',' [2020-04-12:21:37:56:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:57:INFO] Sniff delimiter as ',' [2020-04-12:21:37:57:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:57:INFO] Sniff delimiter as ',' [2020-04-12:21:37:57:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:57:INFO] Sniff delimiter as ',' [2020-04-12:21:37:57:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:57:INFO] Sniff delimiter as ',' [2020-04-12:21:37:57:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:59:INFO] Sniff delimiter as ',' [2020-04-12:21:37:59:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:38:00:INFO] Sniff delimiter as ',' [2020-04-12:21:38:00:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:37:59:INFO] Sniff delimiter as ',' [2020-04-12:21:37:59:INFO] Determined delimiter of CSV input is ',' [2020-04-12:21:38:00:INFO] Sniff delimiter as ',' [2020-04-12:21:38:00:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/370.0 KiB (3.8 MiB/s) with 1 file(s) remaining Completed 370.0 KiB/370.0 KiB (5.4 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-west-1-848439228145/xgboost-2020-04-12-21-34-31-266/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. 
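In code, that quality-control step can be as simple as the sketch below (not from the original notebook): compare the hand-assigned labels with the deployed model's predictions and flag the model for retraining once accuracy drifts below some agreed threshold. The 0.80 cut-off is arbitrary and would be chosen per application.

```python
from sklearn.metrics import accuracy_score

def needs_retraining(labels, predictions, threshold=0.80):
    """Return True when accuracy on newly collected, hand-labelled reviews falls below the threshold."""
    acc = accuracy_score(labels, predictions)
    print("accuracy on recent reviews: {:.3f}".format(acc))
    return acc < threshold
```

The next few cells carry out exactly this comparison by hand for a batch of newly collected reviews.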
###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary # Fixing the vocabulary to the one the deployed model was trained on means we only need to # call transform; re-fitting on the new data would silently change what each column means. vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir,'new_data.csv'), header = False, index = False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir,'new_data.csv'),key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer.transform(new_data_location, data_type='S3Prefix', content_type = 'text/csv', split_type = 'Line', wait = True) ###Output ...........................................! ###Markdown As usual, we copy the results of the batch transform job to our local instance.
###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/365.0 KiB (3.4 MiB/s) with 1 file(s) remaining Completed 365.0 KiB/365.0 KiB (4.7 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-west-1-848439228145/xgboost-2020-04-12-21-40-35-880/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output WARNING:sagemaker:Using already existing model: xgboost-2020-04-12-21-28-17-491 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. 
To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['film', 'special', 'effect', 'time', 'impress', 'easili', 'explain', 'scene', 'play', 'backward', 'overlay', 'move', 'imag', 'object', 'film', 'surprisingli', 'well', 'done', 'given', 'film', 'made', '94', 'year', 'ago', 'banana'], 0) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'playboy', 'ghetto', 'victorian', '21st', 'weari', 'reincarn', 'spill'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'optimist', 'masterson', 'dubiou', 'banana', 'omin', 'sophi', 'orchestr'} ###Markdown These words themselves don't tell us much, however if one of these words occured with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here. Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. 
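One way to start digging (a sketch, not the prescribed answer) is to count how often each word that newly entered the vocabulary actually occurs in the new reviews: a handful of occurrences is noise, while a count in the thousands would point to a real shift in how reviews are being written.

```python
# Encode the new reviews with the new vocabulary (kept sparse to save memory) and
# total up the occurrences of each word that is new to the vocabulary.
freshly_encoded = new_vectorizer.transform(new_X)

for word in sorted(new_vocabulary - original_vocabulary):
    column = new_vectorizer.vocabulary_[word]
    print(word, int(freshly_encoded[:, column].sum()))
```

The cells below take a first pass at this kind of comparison.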
###Code vocabulary.get('21st') #temp_vocabulary = dict(filter(lambda elem: elem.value() in {'21st', 'spill', 'playboy', 'victorian', 'reincarn', 'ghetto', 'weari'},vocabulary.items())) vocabulary_lose = { key:value for (key,value) in vocabulary.items() if key in original_vocabulary - new_vocabulary} print("loose words with weight: ",vocabulary_lose) vocabulary_gain = { key:value for (key,value) in new_vectorizer.vocabulary_.items() if key in new_vocabulary-original_vocabulary} print("gained words with weight: ",vocabulary_gain) print("Max weight on vocabulary: ",vocabulary.get(max(vocabulary, key=vocabulary.get))) print("MAx weight on new vocabulary: ",new_vectorizer.vocabulary_.get(max(vocabulary, key=vocabulary.get))) ###Output Max weight on vocabulary: 4999 MAx weight on new vocabulary: 4999 ###Markdown Victorian, weari, reincarnation and playboy have a high weight and don't appear and sophi, optimist, omin and estr have also a high weight and have not been seen by the model in the fitting stage. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send an bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. 
This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix = prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix = prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix = prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(role = role, train_instance_count = 1, train_instance_type = 'ml.m5.xlarge', train_max_run = 1000, sagemaker_session = session, image_name = container, output_path = 's3://{}/{}/output'.format(session.default_bucket(), prefix)) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. # And then set the algorithm specific parameters. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit(inputs = {'train': s3_new_input_train, 'validation': s3_new_input_validation}, wait = True) ###Output 2020-04-12 22:03:44 Starting - Starting the training job... 2020-04-12 22:03:45 Starting - Launching requested ML instances... 2020-04-12 22:04:43 Starting - Preparing the instances for training......... 2020-04-12 22:06:06 Downloading - Downloading input data... 2020-04-12 22:06:16 Training - Downloading the training imageArguments: train [2020-04-12:22:06:30:INFO] Running standalone xgboost training. [2020-04-12:22:06:30:INFO] File size need to be processed in the node: 238.47mb. 
Available memory size in the node: 7996.52mb [2020-04-12:22:06:30:INFO] Determined delimiter of CSV input is ',' [22:06:30] S3DistributionType set as FullyReplicated [22:06:31] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-04-12:22:06:31:INFO] Determined delimiter of CSV input is ',' [22:06:31] S3DistributionType set as FullyReplicated [22:06:33] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [22:06:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 8 pruned nodes, max_depth=5 [0]#011train-error:0.316067#011validation-error:0.3117 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [22:06:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 4 pruned nodes, max_depth=5 [1]#011train-error:0.3002#011validation-error:0.2972 [22:06:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [2]#011train-error:0.288933#011validation-error:0.2861 [22:06:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [3]#011train-error:0.2744#011validation-error:0.2721 [22:06:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [4]#011train-error:0.2682#011validation-error:0.2643 [22:06:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [5]#011train-error:0.2578#011validation-error:0.2571 [22:06:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [6]#011train-error:0.249533#011validation-error:0.2521 [22:06:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [7]#011train-error:0.245267#011validation-error:0.2492 [22:06:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [8]#011train-error:0.239933#011validation-error:0.2456 [22:06:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [9]#011train-error:0.235867#011validation-error:0.2418 [22:06:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [10]#011train-error:0.229867#011validation-error:0.2366 [22:06:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [11]#011train-error:0.2216#011validation-error:0.231 [22:06:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [12]#011train-error:0.215#011validation-error:0.2245 [22:06:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [13]#011train-error:0.211133#011validation-error:0.2226 [22:06:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [14]#011train-error:0.209733#011validation-error:0.2213 [22:06:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 4 pruned nodes, max_depth=5 [15]#011train-error:0.204533#011validation-error:0.2195 [22:06:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5 [16]#011train-error:0.2018#011validation-error:0.2167 [22:06:57] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [17]#011train-error:0.201333#011validation-error:0.2154 [22:06:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [18]#011train-error:0.200133#011validation-error:0.2134 [22:07:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [19]#011train-error:0.197867#011validation-error:0.2114 2020-04-12 22:06:45 Training - Training image download completed. Training in progress.[22:07:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [20]#011train-error:0.1952#011validation-error:0.2106 [22:07:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [21]#011train-error:0.192533#011validation-error:0.2082 [22:07:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [22]#011train-error:0.191467#011validation-error:0.2056 [22:07:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5 [23]#011train-error:0.189467#011validation-error:0.2044 [22:07:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [24]#011train-error:0.185933#011validation-error:0.201 [22:07:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [25]#011train-error:0.183867#011validation-error:0.1992 [22:07:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [26]#011train-error:0.183267#011validation-error:0.1986 [22:07:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [27]#011train-error:0.181467#011validation-error:0.1967 [22:07:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [28]#011train-error:0.1812#011validation-error:0.198 [22:07:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [29]#011train-error:0.180867#011validation-error:0.1968 [22:07:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [30]#011train-error:0.178133#011validation-error:0.1958 [22:07:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [31]#011train-error:0.176933#011validation-error:0.1942 [22:07:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [32]#011train-error:0.175333#011validation-error:0.1927 [22:07:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [33]#011train-error:0.175067#011validation-error:0.1921 [22:07:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [34]#011train-error:0.1746#011validation-error:0.1934 [22:07:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [35]#011train-error:0.1736#011validation-error:0.193 [22:07:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [36]#011train-error:0.1728#011validation-error:0.1925 [22:07:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 
[37]#011train-error:0.170933#011validation-error:0.1932 [22:07:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5 [38]#011train-error:0.169#011validation-error:0.1923 [22:07:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [39]#011train-error:0.169333#011validation-error:0.1919 [22:07:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [40]#011train-error:0.167533#011validation-error:0.1892 [22:07:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 16 pruned nodes, max_depth=5 [41]#011train-error:0.167133#011validation-error:0.1897 [22:07:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5 [42]#011train-error:0.1646#011validation-error:0.1899 [22:07:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5 [43]#011train-error:0.162067#011validation-error:0.1883 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model #new_xgb_transformer = sagemaker.transformer.Transformer(sagemaker_session = session) new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m5.large', role = role) ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location,data_type='S3Prefix', content_type = 'text/csv', split_type='Line', wait = True) #, content_type = 'text/csv' ###Output ................................................! ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/365.4 KiB (4.3 MiB/s) with 1 file(s) remaining Completed 365.4 KiB/365.4 KiB (6.0 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-west-1-848439228145/xgboost-2020-04-12-22-08-26-871/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. 
So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. 
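As a quick optional sanity check (this is just a sketch, not part of the original workflow), we can ask the low level boto3 SageMaker client, which this notebook already reaches through `session.sagemaker_client`, to describe that model object and confirm that it really was registered. ###Code # Optional sanity check (sketch): confirm that the model created by the batch transform job
# is registered in SageMaker before we reference it in a new endpoint configuration.
model_info = session.sagemaker_client.describe_model(ModelName=new_xgb_transformer.model_name)
print(model_info['ModelName'])
print(model_info['CreationTime'])
###Output _____no_output_____ ###Markdown Either way, the `model_name` property shown below is all we need in order to build the new endpoint configuration.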
###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime from datetime import datetime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = 'UpdateSentimentAnalysis'+datetime.now().strftime("%m-%d-%Y-%H-%M-%S") # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants=[{ 'InstanceType':'ml.m4.xlarge', 'InitialVariantWeight':1, 'InitialInstanceCount':1, 'ModelName':new_xgb_transformer.model_name, 'VariantName':'AllTraffic'}]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint( EndpointName = xgb_predictor.endpoint, EndpointConfigName =new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output -------------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. 
Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. 
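If you would rather avoid the shell magics, or want re-runs of the notebook to skip work that has already been done, the same download and extraction can also be sketched in plain Python. The cell below is an optional alternative (not part of the original notebook) and assumes the same `../data` layout used throughout. ###Code # Optional, pure-Python alternative to the shell commands below (a sketch):
# download and extract the IMDb archive only if it is not already present.
import os
import tarfile
import urllib.request

archive_path = '../data/aclImdb_v1.tar.gz'

os.makedirs('../data', exist_ok=True)
if not os.path.exists(archive_path):
    urllib.request.urlretrieve('http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz', archive_path)
if not os.path.exists('../data/aclImdb'):
    with tarfile.open(archive_path, 'r:gz') as tar:
        tar.extractall('../data')
###Output _____no_output_____ ###Markdown Either way, the extracted reviews end up under `../data/aclImdb`, which is the directory the data preparation code below expects.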
###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output --2020-03-13 04:48:18-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 24.1MB/s in 4.4s 2020-03-13 04:48:23 (18.1 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
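Before defining the full cleaning function, here is a tiny self-contained illustration (a sketch, not part of the original notebook) of the stemming step mentioned above: different surface forms of the same word are collapsed toward a common stem, which keeps the bag-of-words vocabulary from filling up with near-duplicates. ###Code # A small illustration (sketch) of stemming with NLTK's Porter stemmer,
# the same stemmer used by the cleaning function in the next cell.
from nltk.stem.porter import PorterStemmer

example_words = ['loved', 'loving', 'movie', 'movies', 'acting', 'acted']
print([PorterStemmer().stem(w) for w in example_words])
###Output _____no_output_____ ###Markdown With that in mind, the next cell defines the full `review_to_words` helper, which removes HTML, lowercases, strips stopwords and stems each review.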
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Wrote preprocessed data to cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
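The tiny example below (a sketch, not part of the original notebook) makes that point concrete: the vectorizer's vocabulary is fixed when it is fit on the training documents, and any word that never appeared there is silently ignored when the test documents are transformed. This is exactly the issue we will run into later on when the distribution of words in the reviews changes. ###Code # A toy illustration (sketch) of fitting the vocabulary on training data only:
# words that never occur in the training documents contribute nothing at transform time.
from sklearn.feature_extraction.text import CountVectorizer

toy_train = [['movie', 'great', 'great'], ['movie', 'bad']]
toy_test = [['movie', 'terrible']]   # 'terrible' was never seen during fitting

# The toy reviews are already tokenized, so we skip preprocessing and tokenization,
# just like the real feature extraction code in the next cell.
toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)

print(toy_vectorizer.fit_transform(toy_train).toarray())
print(toy_vectorizer.vocabulary_)                    # word -> column index
print(toy_vectorizer.transform(toy_test).toarray())  # the unseen word is simply dropped
###Output _____no_output_____ ###Markdown The real feature extraction below does the same thing, just at the scale of a 5000 word vocabulary.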
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])

val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.
For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
    os.makedirs(data_dir)

pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)

# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3
Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.
For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.
Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.
For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker

session = sagemaker.Session() # Store the current SageMaker session

# S3 prefix (which folder will we use)
prefix = 'sentiment-update'

test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output _____no_output_____ ###Markdown Creating the XGBoost model
Now that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker.
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.
- Model Artifacts
- Training Code (Container)
- Inference Code (Container)
The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.
The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.
The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role

# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()

# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')

# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
                                    role,      # What is our current IAM Role
                                    train_instance_count=1, # How many compute instances
                                    train_instance_type='ml.m4.xlarge', # What kind of compute instances
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                    sagemaker_session=session)

# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
                        eta=0.2,
                        gamma=4,
                        min_child_weight=6,
                        subsample=0.8,
                        silent=0,
                        objective='binary:logistic',
                        early_stopping_rounds=10,
                        num_round=500)
###Output _____no_output_____ ###Markdown Fit the XGBoost model
Now that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')

xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output 2020-03-13 05:30:59 Starting - Starting the training job... 2020-03-13 05:31:01 Starting - Launching requested ML instances...... 2020-03-13 05:32:07 Starting - Preparing the instances for training...... 2020-03-13 05:33:02 Downloading - Downloading input data... 2020-03-13 05:33:55 Training - Training image download completed. Training in progress..Arguments: train [2020-03-13:05:33:56:INFO] Running standalone xgboost training. [2020-03-13:05:33:56:INFO] File size need to be processed in the node: 238.47mb.
Available memory size in the node: 8507.5mb [2020-03-13:05:33:56:INFO] Determined delimiter of CSV input is ',' [05:33:56] S3DistributionType set as FullyReplicated [05:33:57] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-03-13:05:33:57:INFO] Determined delimiter of CSV input is ',' [05:33:57] S3DistributionType set as FullyReplicated [05:33:59] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [05:34:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5 [0]#011train-error:0.298933#011validation-error:0.3023 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [05:34:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 0 pruned nodes, max_depth=5 [1]#011train-error:0.2822#011validation-error:0.2891 [05:34:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [2]#011train-error:0.279667#011validation-error:0.2847 [05:34:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [3]#011train-error:0.271467#011validation-error:0.2753 [05:34:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [4]#011train-error:0.261667#011validation-error:0.2693 [05:34:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.2528#011validation-error:0.2611 [05:34:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [6]#011train-error:0.246333#011validation-error:0.2549 [05:34:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 2 pruned nodes, max_depth=5 [7]#011train-error:0.237733#011validation-error:0.2473 [05:34:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [8]#011train-error:0.227067#011validation-error:0.2389 [05:34:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [9]#011train-error:0.2236#011validation-error:0.2343 [05:34:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [10]#011train-error:0.217267#011validation-error:0.2247 [05:34:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [11]#011train-error:0.216267#011validation-error:0.2235 [05:34:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [12]#011train-error:0.2102#011validation-error:0.219 [05:34:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [13]#011train-error:0.208#011validation-error:0.2166 [05:34:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [14]#011train-error:0.206467#011validation-error:0.216 [05:34:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 12 pruned nodes, max_depth=5 [15]#011train-error:0.201867#011validation-error:0.2146 [05:34:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [16]#011train-error:0.198067#011validation-error:0.2106 [05:34:24] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [17]#011train-error:0.195067#011validation-error:0.2085 [05:34:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [18]#011train-error:0.191133#011validation-error:0.2049 [05:34:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [19]#011train-error:0.1872#011validation-error:0.2032 [05:34:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [20]#011train-error:0.182533#011validation-error:0.1992 [05:34:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [21]#011train-error:0.1806#011validation-error:0.1984 [05:34:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [22]#011train-error:0.177533#011validation-error:0.1974 [05:34:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [23]#011train-error:0.1758#011validation-error:0.1961 [05:34:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [24]#011train-error:0.173733#011validation-error:0.1938 [05:34:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5 [25]#011train-error:0.1728#011validation-error:0.195 [05:34:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [26]#011train-error:0.1716#011validation-error:0.1932 [05:34:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [27]#011train-error:0.168533#011validation-error:0.1914 [05:34:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [28]#011train-error:0.166#011validation-error:0.1905 [05:34:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [29]#011train-error:0.164733#011validation-error:0.1883 [05:34:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [30]#011train-error:0.1636#011validation-error:0.1862 [05:34:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [31]#011train-error:0.162333#011validation-error:0.1849 [05:34:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [32]#011train-error:0.161133#011validation-error:0.1835 [05:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [33]#011train-error:0.159733#011validation-error:0.1826 [05:34:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [34]#011train-error:0.158733#011validation-error:0.1808 [05:34:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [35]#011train-error:0.1576#011validation-error:0.1802 [05:34:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [36]#011train-error:0.157733#011validation-error:0.1808 [05:34:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5 [37]#011train-error:0.155333#011validation-error:0.18 [05:34:51] src/tree/updater_prune.cc:74: tree 
pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [38]#011train-error:0.154667#011validation-error:0.1789 [05:34:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [39]#011train-error:0.154267#011validation-error:0.1785 [05:34:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 16 pruned nodes, max_depth=5 [40]#011train-error:0.152733#011validation-error:0.1777 [05:34:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [41]#011train-error:0.151667#011validation-error:0.1763 [05:34:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [42]#011train-error:0.1502#011validation-error:0.1769 [05:34:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [43]#011train-error:0.148867#011validation-error:0.1761 [05:34:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [44]#011train-error:0.146133#011validation-error:0.1752 [05:35:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [45]#011train-error:0.1454#011validation-error:0.1737 [05:35:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5 [46]#011train-error:0.144667#011validation-error:0.174 [05:35:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5 [47]#011train-error:0.1434#011validation-error:0.1728 [05:35:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 16 pruned nodes, max_depth=5 [48]#011train-error:0.142667#011validation-error:0.1709 [05:35:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [49]#011train-error:0.142133#011validation-error:0.1703 [05:35:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5 [50]#011train-error:0.141#011validation-error:0.1706 [05:35:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5 [51]#011train-error:0.139867#011validation-error:0.1691 [05:35:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [52]#011train-error:0.1378#011validation-error:0.169 [05:35:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5 [53]#011train-error:0.1372#011validation-error:0.1686 [05:35:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [54]#011train-error:0.1358#011validation-error:0.167 [05:35:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=5 [55]#011train-error:0.135#011validation-error:0.1658 [05:35:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [56]#011train-error:0.1336#011validation-error:0.1663 [05:35:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5 [57]#011train-error:0.133267#011validation-error:0.1667 [05:35:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [58]#011train-error:0.132333#011validation-error:0.1643 [05:35:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra 
nodes, 8 pruned nodes, max_depth=5 [59]#011train-error:0.131267#011validation-error:0.1645 [05:35:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [60]#011train-error:0.131133#011validation-error:0.1641 [05:35:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 16 pruned nodes, max_depth=5 [61]#011train-error:0.130867#011validation-error:0.163 [05:35:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [62]#011train-error:0.130333#011validation-error:0.1632 [05:35:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [63]#011train-error:0.1292#011validation-error:0.1638 [05:35:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5 [64]#011train-error:0.128467#011validation-error:0.1629 [05:35:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [65]#011train-error:0.127467#011validation-error:0.1617 [05:35:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [66]#011train-error:0.126467#011validation-error:0.1607 [05:35:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5 [67]#011train-error:0.126#011validation-error:0.1598 [05:35:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5 [68]#011train-error:0.125867#011validation-error:0.1607 [05:35:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5 [69]#011train-error:0.125333#011validation-error:0.1593 [05:35:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [70]#011train-error:0.125#011validation-error:0.1596 [05:35:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=5 [71]#011train-error:0.124267#011validation-error:0.159 [05:35:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 16 pruned nodes, max_depth=5 [72]#011train-error:0.123667#011validation-error:0.1586 [05:35:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5 [73]#011train-error:0.123467#011validation-error:0.1575 [05:35:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [74]#011train-error:0.1222#011validation-error:0.1584 [05:35:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [75]#011train-error:0.121067#011validation-error:0.1574 [05:35:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [76]#011train-error:0.119733#011validation-error:0.1569 [05:35:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [77]#011train-error:0.118933#011validation-error:0.157 [05:35:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [78]#011train-error:0.1178#011validation-error:0.1569 [05:35:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5 [79]#011train-error:0.116333#011validation-error:0.1565 [05:35:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, 
max_depth=5 [80]#011train-error:0.115733#011validation-error:0.1566 [05:35:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [81]#011train-error:0.115533#011validation-error:0.1564 [05:35:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [82]#011train-error:0.115333#011validation-error:0.157 [05:35:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5 [83]#011train-error:0.1148#011validation-error:0.1574 [05:35:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [84]#011train-error:0.1152#011validation-error:0.1564 [05:35:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5 [85]#011train-error:0.114467#011validation-error:0.1561 [05:35:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5 [86]#011train-error:0.114#011validation-error:0.1555 [05:35:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5 [87]#011train-error:0.114067#011validation-error:0.155 [05:35:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5 [88]#011train-error:0.1138#011validation-error:0.1547 [05:35:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [89]#011train-error:0.1132#011validation-error:0.1549 [05:35:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [90]#011train-error:0.111667#011validation-error:0.1537 [05:35:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [91]#011train-error:0.112067#011validation-error:0.1534 [05:35:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5 [92]#011train-error:0.110267#011validation-error:0.1542 [05:36:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 12 pruned nodes, max_depth=5 [93]#011train-error:0.111067#011validation-error:0.1554 [05:36:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [94]#011train-error:0.1106#011validation-error:0.1549 [05:36:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [95]#011train-error:0.1098#011validation-error:0.1544 [05:36:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 0 pruned nodes, max_depth=5 [96]#011train-error:0.109267#011validation-error:0.1536 [05:36:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5 [97]#011train-error:0.107933#011validation-error:0.1532 [05:36:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5 [98]#011train-error:0.107867#011validation-error:0.1526 [05:36:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 18 pruned nodes, max_depth=5 [99]#011train-error:0.107133#011validation-error:0.1531 [05:36:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [100]#011train-error:0.106533#011validation-error:0.1528 [05:36:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5 
[101]#011train-error:0.1058#011validation-error:0.1521 [05:36:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [102]#011train-error:0.1054#011validation-error:0.1528 [05:36:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5 [103]#011train-error:0.105333#011validation-error:0.1521 [05:36:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5 [104]#011train-error:0.104533#011validation-error:0.1521 [05:36:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5 [105]#011train-error:0.104133#011validation-error:0.1522 [05:36:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5 [106]#011train-error:0.1042#011validation-error:0.1521 [05:36:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5 [107]#011train-error:0.1036#011validation-error:0.1514 [05:36:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [108]#011train-error:0.103467#011validation-error:0.1509 [05:36:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [109]#011train-error:0.102467#011validation-error:0.15 [05:36:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=5 [110]#011train-error:0.1022#011validation-error:0.1499 [05:36:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [111]#011train-error:0.101933#011validation-error:0.1504 [05:36:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [112]#011train-error:0.100933#011validation-error:0.1499 [05:36:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5 [113]#011train-error:0.100333#011validation-error:0.1501 [05:36:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5 [114]#011train-error:0.100267#011validation-error:0.1493 [05:36:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=5 [115]#011train-error:0.099867#011validation-error:0.1492 [05:36:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5 [116]#011train-error:0.099133#011validation-error:0.1487 [05:36:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 16 pruned nodes, max_depth=5 [117]#011train-error:0.099267#011validation-error:0.1495 [05:36:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [118]#011train-error:0.098667#011validation-error:0.1486 [05:36:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5 [119]#011train-error:0.098#011validation-error:0.1488 [05:36:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [120]#011train-error:0.097867#011validation-error:0.1485 [05:36:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [121]#011train-error:0.097133#011validation-error:0.1489 [05:36:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 24 pruned nodes, max_depth=5 
[122]#011train-error:0.096867#011validation-error:0.1493 [05:36:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 24 pruned nodes, max_depth=5 [123]#011train-error:0.096733#011validation-error:0.148 [05:36:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5 [124]#011train-error:0.0956#011validation-error:0.1479 [05:36:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 16 pruned nodes, max_depth=5 [125]#011train-error:0.095667#011validation-error:0.147 [05:36:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [126]#011train-error:0.0956#011validation-error:0.1465 [05:36:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [127]#011train-error:0.0956#011validation-error:0.1465 [05:36:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5 [128]#011train-error:0.095733#011validation-error:0.1459 [05:36:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 18 pruned nodes, max_depth=5 [129]#011train-error:0.0952#011validation-error:0.1466 [05:36:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [130]#011train-error:0.095067#011validation-error:0.1463 [05:36:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [131]#011train-error:0.094867#011validation-error:0.1464 [05:36:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [132]#011train-error:0.094533#011validation-error:0.1462 [05:36:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5 [133]#011train-error:0.093667#011validation-error:0.1454 [05:36:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [134]#011train-error:0.093333#011validation-error:0.1456 [05:36:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=5 [135]#011train-error:0.093333#011validation-error:0.1458 [05:36:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [136]#011train-error:0.093267#011validation-error:0.1463 [05:36:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [137]#011train-error:0.0932#011validation-error:0.1465 [05:36:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 16 pruned nodes, max_depth=5 [138]#011train-error:0.092467#011validation-error:0.1466 [05:36:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [139]#011train-error:0.0924#011validation-error:0.147 [05:37:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5 [140]#011train-error:0.091733#011validation-error:0.1463 [05:37:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [141]#011train-error:0.090933#011validation-error:0.1453 [05:37:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5 [142]#011train-error:0.0906#011validation-error:0.1443 [05:37:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 
[143]#011train-error:0.09#011validation-error:0.1433 [05:37:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5 [144]#011train-error:0.089533#011validation-error:0.1437 [05:37:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 16 pruned nodes, max_depth=5 [145]#011train-error:0.0892#011validation-error:0.1436 [05:37:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [146]#011train-error:0.089133#011validation-error:0.1433 [05:37:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 16 pruned nodes, max_depth=5 [147]#011train-error:0.089#011validation-error:0.1431 [05:37:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5 [148]#011train-error:0.088933#011validation-error:0.1431 [05:37:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5 [149]#011train-error:0.089#011validation-error:0.1434 [05:37:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [150]#011train-error:0.0886#011validation-error:0.1432 [05:37:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 20 pruned nodes, max_depth=5 [151]#011train-error:0.0884#011validation-error:0.1439 [05:37:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [152]#011train-error:0.087867#011validation-error:0.1435 [05:37:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [153]#011train-error:0.087267#011validation-error:0.1437 [05:37:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=5 [154]#011train-error:0.087267#011validation-error:0.1433 [05:37:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5 [155]#011train-error:0.0872#011validation-error:0.1432 [05:37:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5 [156]#011train-error:0.087067#011validation-error:0.143 [05:37:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [157]#011train-error:0.0868#011validation-error:0.1423 [05:37:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5 [158]#011train-error:0.0864#011validation-error:0.1428 [05:37:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 16 pruned nodes, max_depth=5 [159]#011train-error:0.085933#011validation-error:0.1425 [05:37:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [160]#011train-error:0.086#011validation-error:0.1426 [05:37:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [161]#011train-error:0.0854#011validation-error:0.1423 [05:37:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5 [162]#011train-error:0.0852#011validation-error:0.1428 [05:37:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5 [163]#011train-error:0.084867#011validation-error:0.1424 [05:37:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 24 pruned nodes, max_depth=5 
[164]#011train-error:0.084467#011validation-error:0.1423 [05:37:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=5 [165]#011train-error:0.0844#011validation-error:0.1417 [05:37:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 16 pruned nodes, max_depth=5 [166]#011train-error:0.0846#011validation-error:0.1423 [05:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [167]#011train-error:0.084267#011validation-error:0.1423 [05:37:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 [168]#011train-error:0.083667#011validation-error:0.1412 [05:37:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 16 pruned nodes, max_depth=5 [169]#011train-error:0.0836#011validation-error:0.1418 [05:37:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5 [170]#011train-error:0.083533#011validation-error:0.1415 [05:37:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5 [171]#011train-error:0.083333#011validation-error:0.1409 [05:37:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [172]#011train-error:0.082933#011validation-error:0.14 [05:37:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [173]#011train-error:0.0826#011validation-error:0.1402 [05:37:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [174]#011train-error:0.0824#011validation-error:0.1401 [05:37:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5 [175]#011train-error:0.0826#011validation-error:0.1403 [05:37:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [176]#011train-error:0.082#011validation-error:0.1407 [05:37:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5 [177]#011train-error:0.081533#011validation-error:0.14 [05:37:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=5 [178]#011train-error:0.081333#011validation-error:0.1398 [05:37:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5 [179]#011train-error:0.080867#011validation-error:0.1401 [05:37:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5 [180]#011train-error:0.080667#011validation-error:0.1401 [05:37:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=5 [181]#011train-error:0.079933#011validation-error:0.1393 [05:37:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5 [182]#011train-error:0.080267#011validation-error:0.1398 [05:37:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5 [183]#011train-error:0.079933#011validation-error:0.1396 [05:37:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5 [184]#011train-error:0.080133#011validation-error:0.1399 [05:37:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 
[185]#011train-error:0.0794#011validation-error:0.1396 [05:37:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [186]#011train-error:0.079467#011validation-error:0.1402 [05:38:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5 [187]#011train-error:0.079267#011validation-error:0.1397 [05:38:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 16 pruned nodes, max_depth=5 [188]#011train-error:0.079133#011validation-error:0.14 [05:38:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=5 [189]#011train-error:0.079067#011validation-error:0.1403 [05:38:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5 [190]#011train-error:0.079333#011validation-error:0.1401 [05:38:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 18 pruned nodes, max_depth=5 [191]#011train-error:0.0788#011validation-error:0.1405 Stopping. Best iteration: [181]#011train-error:0.079933#011validation-error:0.1393 2020-03-13 05:38:16 Uploading - Uploading generated training model 2020-03-13 05:38:16 Completed - Training job completed Training seconds: 314 Billable seconds: 314 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real time. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples at once. An example of this in industry might be producing an end-of-month report. This method of inference is also useful to us here because it lets us perform inference on our entire test set. To perform a Batch Transform job we first need to create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running, but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback, we can run the `wait()` method.
###Code xgb_transformer.wait() ###Output .....................Arguments: serve [2020-03-13 05:42:09 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-03-13 05:42:09 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-03-13 05:42:09 +0000] [1] [INFO] Using worker: gevent [2020-03-13 05:42:09 +0000] [38] [INFO] Booting worker with pid: 38 [2020-03-13 05:42:09 +0000] [39] [INFO] Booting worker with pid: 39 [2020-03-13 05:42:09 +0000] [40] [INFO] Booting worker with pid: 40 [2020-03-13 05:42:09 +0000] [41] [INFO] Booting worker with pid: 41 [2020-03-13:05:42:09:INFO] Model loaded successfully for worker : 38 [2020-03-13:05:42:09:INFO] Model loaded successfully for worker : 40 [2020-03-13:05:42:09:INFO] Model loaded successfully for worker : 39 [2020-03-13:05:42:09:INFO] Model loaded successfully for worker : 41 2020-03-13T05:42:42.921:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-03-13:05:42:45:INFO] Sniff delimiter as ',' [2020-03-13:05:42:45:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:45:INFO] Sniff delimiter as ',' [2020-03-13:05:42:45:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:45:INFO] Sniff delimiter as ',' [2020-03-13:05:42:45:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:45:INFO] Sniff delimiter as ',' [2020-03-13:05:42:45:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:45:INFO] Sniff delimiter as ',' [2020-03-13:05:42:45:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:45:INFO] Sniff delimiter as ',' [2020-03-13:05:42:45:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:46:INFO] Sniff delimiter as ',' [2020-03-13:05:42:46:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:46:INFO] Sniff delimiter as ',' [2020-03-13:05:42:46:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:48:INFO] Sniff delimiter as ',' [2020-03-13:05:42:48:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:48:INFO] Sniff delimiter as ',' [2020-03-13:05:42:48:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:48:INFO] Sniff delimiter as ',' [2020-03-13:05:42:48:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:48:INFO] Sniff delimiter as ',' [2020-03-13:05:42:48:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:48:INFO] Sniff delimiter as ',' [2020-03-13:05:42:48:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:48:INFO] Sniff delimiter as ',' [2020-03-13:05:42:48:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:48:INFO] Sniff delimiter as ',' [2020-03-13:05:42:48:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:48:INFO] Sniff delimiter as ',' [2020-03-13:05:42:48:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:50:INFO] Sniff delimiter as ',' [2020-03-13:05:42:50:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:51:INFO] Sniff delimiter as ',' [2020-03-13:05:42:51:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:50:INFO] Sniff delimiter as ',' [2020-03-13:05:42:50:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:51:INFO] Sniff delimiter as ',' [2020-03-13:05:42:51:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:53:INFO] Sniff delimiter as ',' [2020-03-13:05:42:53:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:53:INFO] Sniff delimiter as ',' [2020-03-13:05:42:53:INFO] Determined delimiter of CSV input is ',' 
[2020-03-13:05:42:53:INFO] Sniff delimiter as ',' [2020-03-13:05:42:53:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:53:INFO] Sniff delimiter as ',' [2020-03-13:05:42:53:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:53:INFO] Sniff delimiter as ',' [2020-03-13:05:42:53:INFO] Sniff delimiter as ',' [2020-03-13:05:42:53:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:53:INFO] Sniff delimiter as ',' [2020-03-13:05:42:53:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:53:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:53:INFO] Sniff delimiter as ',' [2020-03-13:05:42:53:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:55:INFO] Sniff delimiter as ',' [2020-03-13:05:42:55:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:55:INFO] Sniff delimiter as ',' [2020-03-13:05:42:55:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:55:INFO] Sniff delimiter as ',' [2020-03-13:05:42:55:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:56:INFO] Sniff delimiter as ',' [2020-03-13:05:42:56:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:55:INFO] Sniff delimiter as ',' [2020-03-13:05:42:55:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:55:INFO] Sniff delimiter as ',' [2020-03-13:05:42:55:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:55:INFO] Sniff delimiter as ',' [2020-03-13:05:42:55:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:56:INFO] Sniff delimiter as ',' [2020-03-13:05:42:56:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:58:INFO] Sniff delimiter as ',' [2020-03-13:05:42:58:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:58:INFO] Sniff delimiter as ',' [2020-03-13:05:42:58:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:58:INFO] Sniff delimiter as ',' [2020-03-13:05:42:58:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:58:INFO] Sniff delimiter as ',' [2020-03-13:05:42:58:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:58:INFO] Sniff delimiter as ',' [2020-03-13:05:42:58:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:58:INFO] Sniff delimiter as ',' [2020-03-13:05:42:58:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:58:INFO] Sniff delimiter as ',' [2020-03-13:05:42:58:INFO] Sniff delimiter as ',' [2020-03-13:05:42:58:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:42:58:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:00:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:00:INFO] Sniff delimiter as ',' [2020-03-13:05:43:00:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:00:INFO] Sniff delimiter as ',' [2020-03-13:05:43:00:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:01:INFO] Sniff delimiter as ',' [2020-03-13:05:43:00:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:00:INFO] Sniff delimiter as ',' [2020-03-13:05:43:00:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:00:INFO] Sniff delimiter as ',' [2020-03-13:05:43:00:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:01:INFO] Sniff delimiter as ',' [2020-03-13:05:43:01:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:01:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:02:INFO] Sniff delimiter as ',' [2020-03-13:05:43:02:INFO] Determined delimiter of CSV input is 
',' [2020-03-13:05:43:03:INFO] Sniff delimiter as ',' [2020-03-13:05:43:03:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:03:INFO] Sniff delimiter as ',' [2020-03-13:05:43:03:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:02:INFO] Sniff delimiter as ',' [2020-03-13:05:43:02:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:03:INFO] Sniff delimiter as ',' [2020-03-13:05:43:03:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:03:INFO] Sniff delimiter as ',' [2020-03-13:05:43:03:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:03:INFO] Sniff delimiter as ',' [2020-03-13:05:43:03:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:03:INFO] Sniff delimiter as ',' [2020-03-13:05:43:03:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:05:INFO] Sniff delimiter as ',' [2020-03-13:05:43:05:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:05:INFO] Sniff delimiter as ',' [2020-03-13:05:43:05:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:05:INFO] Sniff delimiter as ',' [2020-03-13:05:43:05:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:05:INFO] Sniff delimiter as ',' [2020-03-13:05:43:05:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:06:INFO] Sniff delimiter as ',' [2020-03-13:05:43:06:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:05:INFO] Sniff delimiter as ',' [2020-03-13:05:43:05:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:05:INFO] Sniff delimiter as ',' [2020-03-13:05:43:05:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:06:INFO] Sniff delimiter as ',' [2020-03-13:05:43:06:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:07:INFO] Sniff delimiter as ',' [2020-03-13:05:43:07:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:07:INFO] Sniff delimiter as ',' [2020-03-13:05:43:07:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:08:INFO] Sniff delimiter as ',' [2020-03-13:05:43:08:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:07:INFO] Sniff delimiter as ',' [2020-03-13:05:43:07:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:07:INFO] Sniff delimiter as ',' [2020-03-13:05:43:07:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:08:INFO] Sniff delimiter as ',' [2020-03-13:05:43:08:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:08:INFO] Sniff delimiter as ',' [2020-03-13:05:43:08:INFO] Sniff delimiter as ',' [2020-03-13:05:43:08:INFO] Determined delimiter of CSV input is ',' [2020-03-13:05:43:08:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-us-east-1-788544388985/xgboost-2020-03-13-05-38-44-360/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. 
###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer = lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. 
###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output .........................Arguments: serve [2020-03-13 06:12:26 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-03-13 06:12:26 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-03-13 06:12:26 +0000] [1] [INFO] Using worker: gevent [2020-03-13 06:12:26 +0000] [38] [INFO] Booting worker with pid: 38 [2020-03-13 06:12:26 +0000] [39] [INFO] Booting worker with pid: 39 [2020-03-13 06:12:26 +0000] [40] [INFO] Booting worker with pid: 40 [2020-03-13:06:12:26:INFO] Model loaded successfully for worker : 38 [2020-03-13 06:12:26 +0000] [41] [INFO] Booting worker with pid: 41 [2020-03-13:06:12:27:INFO] Model loaded successfully for worker : 39 [2020-03-13:06:12:27:INFO] Model loaded successfully for worker : 40 [2020-03-13:06:12:27:INFO] Model loaded successfully for worker : 41 [2020-03-13:06:12:56:INFO] Sniff delimiter as ',' [2020-03-13:06:12:56:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:12:56:INFO] Sniff delimiter as ',' [2020-03-13:06:12:56:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:12:56:INFO] Sniff delimiter as ',' [2020-03-13:06:12:56:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:12:56:INFO] Sniff delimiter as ',' [2020-03-13:06:12:56:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:12:56:INFO] Sniff delimiter as ',' [2020-03-13:06:12:56:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:12:56:INFO] Sniff delimiter as ',' [2020-03-13:06:12:56:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:12:56:INFO] Sniff delimiter as ',' [2020-03-13:06:12:56:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:12:56:INFO] Sniff delimiter as ',' [2020-03-13:06:12:56:INFO] Determined delimiter of CSV input is ',' 2020-03-13T06:12:53.394:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-03-13:06:12:58:INFO] Sniff delimiter as ',' [2020-03-13:06:12:58:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:12:58:INFO] Sniff delimiter as ',' [2020-03-13:06:12:58:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:12:58:INFO] Sniff delimiter as ',' [2020-03-13:06:12:58:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:12:59:INFO] Sniff delimiter as ',' [2020-03-13:06:12:59:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:12:58:INFO] Sniff delimiter as ',' [2020-03-13:06:12:58:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:12:58:INFO] Sniff delimiter as ',' [2020-03-13:06:12:58:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:12:58:INFO] Sniff delimiter as ',' [2020-03-13:06:12:58:INFO] Determined delimiter of CSV input is 
',' [2020-03-13:06:12:59:INFO] Sniff delimiter as ',' [2020-03-13:06:12:59:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:01:INFO] Sniff delimiter as ',' [2020-03-13:06:13:01:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:01:INFO] Sniff delimiter as ',' [2020-03-13:06:13:01:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:01:INFO] Sniff delimiter as ',' [2020-03-13:06:13:01:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:01:INFO] Sniff delimiter as ',' [2020-03-13:06:13:01:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:01:INFO] Sniff delimiter as ',' [2020-03-13:06:13:01:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:01:INFO] Sniff delimiter as ',' [2020-03-13:06:13:01:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:01:INFO] Sniff delimiter as ',' [2020-03-13:06:13:01:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:01:INFO] Sniff delimiter as ',' [2020-03-13:06:13:01:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:03:INFO] Sniff delimiter as ',' [2020-03-13:06:13:03:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:03:INFO] Sniff delimiter as ',' [2020-03-13:06:13:03:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:03:INFO] Sniff delimiter as ',' [2020-03-13:06:13:03:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:04:INFO] Sniff delimiter as ',' [2020-03-13:06:13:04:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:03:INFO] Sniff delimiter as ',' [2020-03-13:06:13:03:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:03:INFO] Sniff delimiter as ',' [2020-03-13:06:13:03:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:03:INFO] Sniff delimiter as ',' [2020-03-13:06:13:03:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:04:INFO] Sniff delimiter as ',' [2020-03-13:06:13:04:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:08:INFO] Sniff delimiter as ',' [2020-03-13:06:13:08:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:08:INFO] Sniff delimiter as ',' [2020-03-13:06:13:08:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:08:INFO] Sniff delimiter as ',' [2020-03-13:06:13:08:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:09:INFO] Sniff delimiter as ',' [2020-03-13:06:13:08:INFO] Sniff delimiter as ',' [2020-03-13:06:13:08:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:08:INFO] Sniff delimiter as ',' [2020-03-13:06:13:08:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:08:INFO] Sniff delimiter as ',' [2020-03-13:06:13:08:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:09:INFO] Sniff delimiter as ',' [2020-03-13:06:13:09:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:09:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:10:INFO] Sniff delimiter as ',' [2020-03-13:06:13:10:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:10:INFO] Sniff delimiter as ',' [2020-03-13:06:13:10:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:10:INFO] Sniff delimiter as ',' [2020-03-13:06:13:10:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:10:INFO] Sniff delimiter as ',' [2020-03-13:06:13:10:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:11:INFO] Sniff delimiter as ',' [2020-03-13:06:13:11:INFO] Determined delimiter of CSV input is ',' 
[2020-03-13:06:13:11:INFO] Sniff delimiter as ',' [2020-03-13:06:13:11:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:11:INFO] Sniff delimiter as ',' [2020-03-13:06:13:11:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:11:INFO] Sniff delimiter as ',' [2020-03-13:06:13:11:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:13:INFO] Sniff delimiter as ',' [2020-03-13:06:13:13:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:13:INFO] Sniff delimiter as ',' [2020-03-13:06:13:13:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:13:INFO] Sniff delimiter as ',' [2020-03-13:06:13:13:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:13:INFO] Sniff delimiter as ',' [2020-03-13:06:13:13:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:14:INFO] Sniff delimiter as ',' [2020-03-13:06:13:14:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:13:INFO] Sniff delimiter as ',' [2020-03-13:06:13:13:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:13:INFO] Sniff delimiter as ',' [2020-03-13:06:13:13:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:14:INFO] Sniff delimiter as ',' [2020-03-13:06:13:14:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:16:INFO] Sniff delimiter as ',' [2020-03-13:06:13:16:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:16:INFO] Sniff delimiter as ',' [2020-03-13:06:13:16:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:18:INFO] Sniff delimiter as ',' [2020-03-13:06:13:18:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:18:INFO] Sniff delimiter as ',' [2020-03-13:06:13:18:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:18:INFO] Sniff delimiter as ',' [2020-03-13:06:13:18:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:18:INFO] Sniff delimiter as ',' [2020-03-13:06:13:18:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:18:INFO] Sniff delimiter as ',' [2020-03-13:06:13:18:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:18:INFO] Sniff delimiter as ',' [2020-03-13:06:13:18:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:18:INFO] Sniff delimiter as ',' [2020-03-13:06:13:18:INFO] Determined delimiter of CSV input is ',' [2020-03-13:06:13:18:INFO] Sniff delimiter as ',' [2020-03-13:06:13:18:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-us-east-1-788544388985/xgboost-2020-03-13-06-08-28-581/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. 
In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') ###Output WARNING:sagemaker:Using already existing model: xgboost-2020-03-13-05-30-59-270 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews, so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call `next` on our generator. ###Code print(next(gn)) print(next(gn)) print(next(gn)) ###Output (['one', 'amaz', 'movi', 'realiz', 'chines', 'folklor', 'complic', 'philosoph', 'alway', 'stori', 'behind', 'stori', 'understand', 'everyth', 'know', 'chines', 'folklor', 'studi', 'school', 'complic', 'take', 'give', 'enjoy', 'movi', 'enjoy', 'ride', 'hooray', 'banana'], 0) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data.
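###Markdown As an optional extra check on the misclassified examples above, we can also look at the raw scores the endpoint returns rather than only the rounded labels; scores that are confidently wrong are another hint that the distribution of the incoming reviews has shifted. The cell below is a minimal sketch, assuming the `xgb_predictor` endpoint is still running and that `new_XV` and `new_Y` are still in memory, and it simply mirrors the `float(xgb_predictor.predict(...))` pattern used by the generator above. ###Code
# Peek at the raw scores (probabilities) the deployed endpoint returns for a few of the new reviews.
for idx in range(5):
    raw = float(xgb_predictor.predict(new_XV[idx]))
    print('score: {:.3f} -> predicted: {} | actual: {}'.format(raw, round(raw), new_Y[idx]))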
###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'victorian', 'weari', 'reincarn', 'playboy', 'spill', '21st', 'ghetto'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'banana', 'dubiou', 'sophi', 'optimist', 'omin', 'orchestr', 'masterson'} ###Markdown These words themselves don't tell us much, however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here? Not only which words (if any) appear with a larger than expected frequency, but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data.**Answer** I think that the new data has a few letters missing from some words, which creates new words that affect the classification (a quick frequency count of the newly appearing words is sketched below). (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
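###Markdown Before moving on to the train/validation split below, here is the quick frequency count mentioned in the **Answer** above. This is a minimal sketch, assuming that `new_X`, `original_vocabulary` and `new_vocabulary` from the cells above are still in memory: it tallies how often each word that is new to the vocabulary actually occurs in the new reviews, since a word that shows up far more often than we would expect points directly at what has changed in the data. ###Code
# Count how often each newly appearing vocabulary word occurs across the new reviews.
from collections import Counter

added_words = new_vocabulary - original_vocabulary
added_word_counts = Counter(word for review in new_X for word in review if word in added_words)
print(added_word_counts.most_common())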
###Code import pandas as pd # Earlier we shuffled the training dataset so, to make things simple, we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, role, train_instance_count=1, train_instance_type='ml.m4.xlarge', output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data.
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2020-03-13 06:53:27 Starting - Starting the training job... 2020-03-13 06:53:28 Starting - Launching requested ML instances......... 2020-03-13 06:54:59 Starting - Preparing the instances for training... 2020-03-13 06:55:57 Downloading - Downloading input data...... 2020-03-13 06:56:49 Training - Training image download completed. Training in progress.Arguments: train [2020-03-13:06:56:49:INFO] Running standalone xgboost training. [2020-03-13:06:56:49:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8499.67mb [2020-03-13:06:56:49:INFO] Determined delimiter of CSV input is ',' [06:56:49] S3DistributionType set as FullyReplicated [06:56:51] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-03-13:06:56:51:INFO] Determined delimiter of CSV input is ',' [06:56:51] S3DistributionType set as FullyReplicated [06:56:52] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [06:56:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 4 pruned nodes, max_depth=5 [0]#011train-error:0.302733#011validation-error:0.3072 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [06:56:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [1]#011train-error:0.285467#011validation-error:0.2914 [06:56:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [2]#011train-error:0.272067#011validation-error:0.2819 [06:57:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [3]#011train-error:0.2696#011validation-error:0.2789 [06:57:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 10 pruned nodes, max_depth=5 [4]#011train-error:0.267067#011validation-error:0.2769 [06:57:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.254667#011validation-error:0.2641 [06:57:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [6]#011train-error:0.253267#011validation-error:0.2605 [06:57:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [7]#011train-error:0.250667#011validation-error:0.2568 [06:57:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 2 pruned nodes, max_depth=5 [8]#011train-error:0.241667#011validation-error:0.2512 [06:57:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [9]#011train-error:0.235533#011validation-error:0.2461 [06:57:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [10]#011train-error:0.2288#011validation-error:0.2409 [06:57:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 
[11]#011train-error:0.2246#011validation-error:0.2361 [06:57:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [12]#011train-error:0.2186#011validation-error:0.2322 [06:57:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 2 pruned nodes, max_depth=5 [13]#011train-error:0.218733#011validation-error:0.2339 [06:57:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [14]#011train-error:0.217133#011validation-error:0.2298 [06:57:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [15]#011train-error:0.214267#011validation-error:0.2292 [06:57:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [16]#011train-error:0.212467#011validation-error:0.2257 [06:57:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [17]#011train-error:0.209667#011validation-error:0.2228 [06:57:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [18]#011train-error:0.205333#011validation-error:0.2199 [06:57:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [19]#011train-error:0.201867#011validation-error:0.2181 [06:57:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 14 pruned nodes, max_depth=5 [20]#011train-error:0.199467#011validation-error:0.2162 [06:57:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [21]#011train-error:0.195733#011validation-error:0.2146 [06:57:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [22]#011train-error:0.193067#011validation-error:0.213 [06:57:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [23]#011train-error:0.190333#011validation-error:0.2107 [06:57:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 10 pruned nodes, max_depth=5 [24]#011train-error:0.1878#011validation-error:0.2057 [06:57:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [25]#011train-error:0.187133#011validation-error:0.2053 [06:57:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [26]#011train-error:0.187#011validation-error:0.2037 [06:57:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [27]#011train-error:0.182467#011validation-error:0.2039 [06:57:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [28]#011train-error:0.180533#011validation-error:0.2036 [06:57:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [29]#011train-error:0.180267#011validation-error:0.2021 [06:57:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [30]#011train-error:0.1776#011validation-error:0.2001 [06:57:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [31]#011train-error:0.1756#011validation-error:0.1988 [06:57:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 
[32]#011train-error:0.175133#011validation-error:0.1974 [06:57:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [33]#011train-error:0.1736#011validation-error:0.1977 [06:57:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [34]#011train-error:0.170067#011validation-error:0.1967 [06:57:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [35]#011train-error:0.168467#011validation-error:0.1964 [06:57:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [36]#011train-error:0.168867#011validation-error:0.1939 [06:57:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [37]#011train-error:0.168#011validation-error:0.1935 [06:57:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [38]#011train-error:0.1656#011validation-error:0.1929 [06:57:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5 [39]#011train-error:0.1662#011validation-error:0.194 [06:57:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [40]#011train-error:0.164867#011validation-error:0.1929 [06:57:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [41]#011train-error:0.1626#011validation-error:0.1926 [06:57:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [42]#011train-error:0.159867#011validation-error:0.1918 [06:57:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [43]#011train-error:0.1596#011validation-error:0.1916 [06:57:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [44]#011train-error:0.1586#011validation-error:0.1912 [06:57:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [45]#011train-error:0.157267#011validation-error:0.19 [06:57:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [46]#011train-error:0.155733#011validation-error:0.1899 [06:57:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5 [47]#011train-error:0.155733#011validation-error:0.1887 [06:57:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [48]#011train-error:0.154467#011validation-error:0.1868 [06:57:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 16 pruned nodes, max_depth=5 [49]#011train-error:0.153467#011validation-error:0.1852 [06:57:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [50]#011train-error:0.151867#011validation-error:0.1841 [06:58:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [51]#011train-error:0.1498#011validation-error:0.184 [06:58:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [52]#011train-error:0.1496#011validation-error:0.1837 [06:58:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 
[53]#011train-error:0.148533#011validation-error:0.1832 [06:58:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5 [54]#011train-error:0.148333#011validation-error:0.1834 [06:58:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [55]#011train-error:0.147667#011validation-error:0.1837 [06:58:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5 [56]#011train-error:0.147#011validation-error:0.1829 [06:58:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [57]#011train-error:0.145467#011validation-error:0.1828 [06:58:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [58]#011train-error:0.144733#011validation-error:0.1833 [06:58:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [59]#011train-error:0.145467#011validation-error:0.1843 [06:58:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 14 pruned nodes, max_depth=5 [60]#011train-error:0.144533#011validation-error:0.1824 [06:58:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5 [61]#011train-error:0.144#011validation-error:0.183 [06:58:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 [62]#011train-error:0.1438#011validation-error:0.1835 [06:58:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [63]#011train-error:0.1436#011validation-error:0.1828 [06:58:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [64]#011train-error:0.143467#011validation-error:0.1835 [06:58:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5 [65]#011train-error:0.143133#011validation-error:0.1833 [06:58:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5 [66]#011train-error:0.1422#011validation-error:0.1823 [06:58:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [67]#011train-error:0.141733#011validation-error:0.1822 [06:58:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [68]#011train-error:0.141267#011validation-error:0.182 [06:58:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5 [69]#011train-error:0.1408#011validation-error:0.1824 [06:58:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [70]#011train-error:0.140267#011validation-error:0.1817 [06:58:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [71]#011train-error:0.139467#011validation-error:0.1824 [06:58:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [72]#011train-error:0.138733#011validation-error:0.1834 [06:58:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5 [73]#011train-error:0.137667#011validation-error:0.1815 [06:58:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 
[74]#011train-error:0.1368#011validation-error:0.1809 [06:58:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [75]#011train-error:0.136533#011validation-error:0.1803 [06:58:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 16 pruned nodes, max_depth=5 [76]#011train-error:0.136467#011validation-error:0.1787 [06:58:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5 [77]#011train-error:0.1364#011validation-error:0.1785 [06:58:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5 [78]#011train-error:0.1362#011validation-error:0.1784 [06:58:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [79]#011train-error:0.136467#011validation-error:0.1796 [06:58:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5 [80]#011train-error:0.135533#011validation-error:0.1786 [06:58:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [81]#011train-error:0.135133#011validation-error:0.1788 [06:58:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [82]#011train-error:0.1348#011validation-error:0.179 [06:58:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [83]#011train-error:0.134667#011validation-error:0.1787 [06:58:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [84]#011train-error:0.134067#011validation-error:0.1778 [06:58:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5 [85]#011train-error:0.133267#011validation-error:0.1778 [06:58:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5 [86]#011train-error:0.132667#011validation-error:0.1779 [06:58:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [87]#011train-error:0.132067#011validation-error:0.1771 [06:58:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5 [88]#011train-error:0.131933#011validation-error:0.1769 [06:58:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [89]#011train-error:0.1314#011validation-error:0.1752 [06:58:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [90]#011train-error:0.131467#011validation-error:0.1755 [06:58:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=5 [91]#011train-error:0.131#011validation-error:0.1751 [06:58:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5 [92]#011train-error:0.130733#011validation-error:0.1756 [06:58:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [93]#011train-error:0.130667#011validation-error:0.1754 [06:58:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5 [94]#011train-error:0.13#011validation-error:0.1761 [06:58:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 
[95]#011train-error:0.128533#011validation-error:0.1761 [06:58:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [96]#011train-error:0.128067#011validation-error:0.1758 [06:58:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [97]#011train-error:0.1276#011validation-error:0.176 [06:58:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [98]#011train-error:0.128467#011validation-error:0.1753 [06:59:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [99]#011train-error:0.127467#011validation-error:0.1754 [06:59:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [100]#011train-error:0.126933#011validation-error:0.1751 [06:59:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [101]#011train-error:0.126933#011validation-error:0.1752 Stopping. Best iteration: [91]#011train-error:0.131#011validation-error:0.1751  2020-03-13 06:59:13 Uploading - Uploading generated training model 2020-03-13 06:59:13 Completed - Training job completed Training seconds: 196 Billable seconds: 196 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem?**Answer:** By splitting the data before training into _test, train and validation_. First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count=1, instance_type='ml.m4.xlarge') ###Output WARNING:sagemaker:Using already existing model: xgboost-2020-03-13-06-53-27-132 ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. 
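# Note: content_type tells SageMaker how the uploaded file is serialized (plain csv here),
# and split_type='Line' lets the batch transform job treat each line of the file as a
# separate record when the whole payload is too large to send in a single request.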
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ......................Arguments: serve [2020-03-13 07:03:12 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-03-13 07:03:12 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-03-13 07:03:12 +0000] [1] [INFO] Using worker: gevent [2020-03-13 07:03:12 +0000] [38] [INFO] Booting worker with pid: 38 [2020-03-13 07:03:12 +0000] [39] [INFO] Booting worker with pid: 39 [2020-03-13:07:03:12:INFO] Model loaded successfully for worker : 38 [2020-03-13 07:03:12 +0000] [40] [INFO] Booting worker with pid: 40 [2020-03-13:07:03:12:INFO] Model loaded successfully for worker : 39 [2020-03-13 07:03:12 +0000] [41] [INFO] Booting worker with pid: 41 [2020-03-13:07:03:12:INFO] Model loaded successfully for worker : 40 [2020-03-13:07:03:13:INFO] Model loaded successfully for worker : 41 2020-03-13T07:03:51.404:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-03-13:07:03:53:INFO] Sniff delimiter as ',' [2020-03-13:07:03:53:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:03:53:INFO] Sniff delimiter as ',' [2020-03-13:07:03:53:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:03:53:INFO] Sniff delimiter as ',' [2020-03-13:07:03:53:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:03:53:INFO] Sniff delimiter as ',' [2020-03-13:07:03:53:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:03:53:INFO] Sniff delimiter as ',' [2020-03-13:07:03:53:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:03:53:INFO] Sniff delimiter as ',' [2020-03-13:07:03:53:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:03:54:INFO] Sniff delimiter as ',' [2020-03-13:07:03:54:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:03:54:INFO] Sniff delimiter as ',' [2020-03-13:07:03:54:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:03:56:INFO] Sniff delimiter as ',' [2020-03-13:07:03:56:INFO] Sniff delimiter as ',' [2020-03-13:07:03:56:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:03:56:INFO] Sniff delimiter as ',' [2020-03-13:07:03:56:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:03:56:INFO] Sniff delimiter as ',' [2020-03-13:07:03:56:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:03:57:INFO] Sniff delimiter as ',' [2020-03-13:07:03:57:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:03:56:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:03:56:INFO] Sniff delimiter as ',' [2020-03-13:07:03:56:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:03:56:INFO] Sniff delimiter as ',' [2020-03-13:07:03:56:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:03:57:INFO] Sniff delimiter as ',' [2020-03-13:07:03:57:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:03:59:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:03:59:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:01:INFO] Sniff delimiter as ',' [2020-03-13:07:04:01:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:01:INFO] Sniff delimiter as ',' [2020-03-13:07:04:01:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:01:INFO] Sniff delimiter as ',' [2020-03-13:07:04:01:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:01:INFO] Sniff delimiter as ',' [2020-03-13:07:04:01:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:01:INFO] 
Sniff delimiter as ',' [2020-03-13:07:04:01:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:01:INFO] Sniff delimiter as ',' [2020-03-13:07:04:01:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:02:INFO] Sniff delimiter as ',' [2020-03-13:07:04:02:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:02:INFO] Sniff delimiter as ',' [2020-03-13:07:04:02:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:03:INFO] Sniff delimiter as ',' [2020-03-13:07:04:03:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:03:INFO] Sniff delimiter as ',' [2020-03-13:07:04:03:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:03:INFO] Sniff delimiter as ',' [2020-03-13:07:04:03:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:03:INFO] Sniff delimiter as ',' [2020-03-13:07:04:03:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:03:INFO] Sniff delimiter as ',' [2020-03-13:07:04:03:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:03:INFO] Sniff delimiter as ',' [2020-03-13:07:04:03:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:04:INFO] Sniff delimiter as ',' [2020-03-13:07:04:04:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:04:INFO] Sniff delimiter as ',' [2020-03-13:07:04:04:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:05:INFO] Sniff delimiter as ',' [2020-03-13:07:04:05:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:05:INFO] Sniff delimiter as ',' [2020-03-13:07:04:05:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:06:INFO] Sniff delimiter as ',' [2020-03-13:07:04:06:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:06:INFO] Sniff delimiter as ',' [2020-03-13:07:04:06:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:06:INFO] Sniff delimiter as ',' [2020-03-13:07:04:06:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:06:INFO] Sniff delimiter as ',' [2020-03-13:07:04:06:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:06:INFO] Sniff delimiter as ',' [2020-03-13:07:04:06:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:06:INFO] Sniff delimiter as ',' [2020-03-13:07:04:06:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:09:INFO] Sniff delimiter as ',' [2020-03-13:07:04:09:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:09:INFO] Sniff delimiter as ',' [2020-03-13:07:04:09:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:10:INFO] Sniff delimiter as ',' [2020-03-13:07:04:10:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:10:INFO] Sniff delimiter as ',' [2020-03-13:07:04:10:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:10:INFO] Sniff delimiter as ',' [2020-03-13:07:04:10:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:10:INFO] Sniff delimiter as ',' [2020-03-13:07:04:10:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:11:INFO] Sniff delimiter as ',' [2020-03-13:07:04:11:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:11:INFO] Sniff delimiter as ',' [2020-03-13:07:04:11:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:11:INFO] Sniff delimiter as ',' [2020-03-13:07:04:11:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:11:INFO] Sniff delimiter as ',' [2020-03-13:07:04:11:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:13:INFO] Sniff 
delimiter as ',' [2020-03-13:07:04:13:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:13:INFO] Sniff delimiter as ',' [2020-03-13:07:04:13:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:13:INFO] Sniff delimiter as ',' [2020-03-13:07:04:13:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:13:INFO] Sniff delimiter as ',' [2020-03-13:07:04:13:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:13:INFO] Sniff delimiter as ',' [2020-03-13:07:04:13:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:13:INFO] Sniff delimiter as ',' [2020-03-13:07:04:13:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:14:INFO] Sniff delimiter as ',' [2020-03-13:07:04:14:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:14:INFO] Sniff delimiter as ',' [2020-03-13:07:04:14:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:15:INFO] Sniff delimiter as ',' [2020-03-13:07:04:15:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:15:INFO] Sniff delimiter as ',' [2020-03-13:07:04:15:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:15:INFO] Sniff delimiter as ',' [2020-03-13:07:04:15:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:15:INFO] Sniff delimiter as ',' [2020-03-13:07:04:15:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:16:INFO] Sniff delimiter as ',' [2020-03-13:07:04:16:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:16:INFO] Sniff delimiter as ',' [2020-03-13:07:04:16:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:16:INFO] Sniff delimiter as ',' [2020-03-13:07:04:16:INFO] Determined delimiter of CSV input is ',' [2020-03-13:07:04:16:INFO] Sniff delimiter as ',' [2020-03-13:07:04:16:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-us-east-1-788544388985/xgboost-2020-03-13-06-59-41-089/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. 
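Returning briefly to the leakage question raised above: a cleaner approach is to hold out part of the new data *before* any vocabulary is fit or any retraining happens, so that an honest test set remains for the updated model. A minimal sketch, assuming `new_X` and `new_Y` are the raw reviews and labels returned by `new_data.get_new_data()` (the split variable names below are hypothetical):
###Code
from sklearn.model_selection import train_test_split

# Hold out 20% of the new data before building the new vocabulary or retraining,
# so these reviews are never seen during training and can give an unbiased estimate.
new_train_X, new_test_X, new_train_y, new_test_y = train_test_split(
    new_X, new_Y, test_size=0.2, random_state=42)
###Output
_____no_output_____
###Markdown Any preprocessing, including the bag-of-words vocabulary, would then be fit on the training portion only. With that caveat noted, we reload the cached original test reviews as described above.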
###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = 'sentiment-update-xgboost-endpoint-config' + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. 
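# An endpoint configuration pairs a model with the resources that will serve it: each entry in
# ProductionVariants names the model to deploy (ModelName), the instance type and initial count
# to run it on, and an InitialVariantWeight used to split traffic when an endpoint hosts
# several variants.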
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }] ) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output ---------------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. 
In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-10-21 19:55:57-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 25.3MB/s in 3.6s 2020-10-21 19:56:00 (22.2 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer # from sklearn.externals import joblib import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2020-10-21 19:57:43 Starting - Starting the training job... 2020-10-21 19:57:45 Starting - Launching requested ML instances...... 2020-10-21 19:58:58 Starting - Preparing the instances for training...... 2020-10-21 19:59:50 Downloading - Downloading input data... 2020-10-21 20:00:44 Training - Training image download completed. Training in progress..Arguments: train [2020-10-21:20:00:45:INFO] Running standalone xgboost training. [2020-10-21:20:00:45:INFO] File size need to be processed in the node: 238.47mb.
Available memory size in the node: 8478.67mb [2020-10-21:20:00:45:INFO] Determined delimiter of CSV input is ',' [20:00:45] S3DistributionType set as FullyReplicated [20:00:47] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-10-21:20:00:47:INFO] Determined delimiter of CSV input is ',' [20:00:47] S3DistributionType set as FullyReplicated [20:00:48] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [20:00:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [0]#011train-error:0.3012#011validation-error:0.2965 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [20:00:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 2 pruned nodes, max_depth=5 [1]#011train-error:0.284067#011validation-error:0.2797 [20:00:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [2]#011train-error:0.2812#011validation-error:0.2777 [20:00:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 0 pruned nodes, max_depth=5 [3]#011train-error:0.264133#011validation-error:0.262 [20:00:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5 [4]#011train-error:0.2608#011validation-error:0.2595 [20:00:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.251867#011validation-error:0.2527 [20:00:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [6]#011train-error:0.250533#011validation-error:0.253 [20:01:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [7]#011train-error:0.243933#011validation-error:0.2457 [20:01:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [8]#011train-error:0.234133#011validation-error:0.2365 [20:01:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [9]#011train-error:0.2266#011validation-error:0.2313 [20:01:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [10]#011train-error:0.2208#011validation-error:0.2252 [20:01:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 2 pruned nodes, max_depth=5 [11]#011train-error:0.216333#011validation-error:0.2209 [20:01:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [12]#011train-error:0.211667#011validation-error:0.2184 [20:01:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [13]#011train-error:0.2084#011validation-error:0.2138 [20:01:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [14]#011train-error:0.2052#011validation-error:0.2115 [20:01:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [15]#011train-error:0.202267#011validation-error:0.2086 [20:01:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [16]#011train-error:0.197733#011validation-error:0.2052 [20:01:13] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [17]#011train-error:0.1958#011validation-error:0.2051 [20:01:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [18]#011train-error:0.193067#011validation-error:0.2043 [20:01:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [19]#011train-error:0.1898#011validation-error:0.2029 [20:01:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 18 pruned nodes, max_depth=5 [20]#011train-error:0.186067#011validation-error:0.2004 [20:01:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [21]#011train-error:0.183733#011validation-error:0.1978 [20:01:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5 [22]#011train-error:0.180267#011validation-error:0.1964 [20:01:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [23]#011train-error:0.1794#011validation-error:0.1947 [20:01:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [24]#011train-error:0.1774#011validation-error:0.1926 [20:01:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [25]#011train-error:0.176133#011validation-error:0.1902 [20:01:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [26]#011train-error:0.1732#011validation-error:0.1906 [20:01:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [27]#011train-error:0.171867#011validation-error:0.1873 [20:01:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [28]#011train-error:0.169133#011validation-error:0.1867 [20:01:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [29]#011train-error:0.1676#011validation-error:0.1851 [20:01:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [30]#011train-error:0.165533#011validation-error:0.1835 [20:01:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [31]#011train-error:0.163067#011validation-error:0.1824 [20:01:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [32]#011train-error:0.162733#011validation-error:0.1823 [20:01:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5 [33]#011train-error:0.162133#011validation-error:0.1817 [20:01:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [34]#011train-error:0.161267#011validation-error:0.1816 [20:01:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5 [35]#011train-error:0.160067#011validation-error:0.1807 [20:01:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [36]#011train-error:0.159067#011validation-error:0.1798 [20:01:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [37]#011train-error:0.156667#011validation-error:0.1797 [20:01:40] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [38]#011train-error:0.155133#011validation-error:0.1793 [20:01:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [39]#011train-error:0.153533#011validation-error:0.1788 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real time. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback, we can run the `wait()` method.
###Code xgb_transformer.wait() ###Output .................................Arguments: serve Arguments: serve [2020-10-21 20:10:15 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-10-21 20:10:15 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-10-21 20:10:15 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-10-21 20:10:15 +0000] [1] [INFO] Using worker: gevent [2020-10-21 20:10:15 +0000] [36] [INFO] Booting worker with pid: 36 [2020-10-21 20:10:15 +0000] [37] [INFO] Booting worker with pid: 37 [2020-10-21:20:10:15:INFO] Model loaded successfully for worker : 36 [2020-10-21 20:10:15 +0000] [38] [INFO] Booting worker with pid: 38 [2020-10-21 20:10:15 +0000] [39] [INFO] Booting worker with pid: 39 [2020-10-21:20:10:15:INFO] Model loaded successfully for worker : 37 [2020-10-21:20:10:16:INFO] Model loaded successfully for worker : 38 [2020-10-21 20:10:15 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-10-21 20:10:15 +0000] [1] [INFO] Using worker: gevent [2020-10-21 20:10:15 +0000] [36] [INFO] Booting worker with pid: 36 [2020-10-21 20:10:15 +0000] [37] [INFO] Booting worker with pid: 37 [2020-10-21:20:10:15:INFO] Model loaded successfully for worker : 36 [2020-10-21 20:10:15 +0000] [38] [INFO] Booting worker with pid: 38 [2020-10-21 20:10:15 +0000] [39] [INFO] Booting worker with pid: 39 [2020-10-21:20:10:15:INFO] Model loaded successfully for worker : 37 [2020-10-21:20:10:16:INFO] Model loaded successfully for worker : 38 [2020-10-21:20:10:16:INFO] Model loaded successfully for worker : 39 [2020-10-21:20:10:16:INFO] Sniff delimiter as ',' [2020-10-21:20:10:16:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:16:INFO] Sniff delimiter as ',' [2020-10-21:20:10:16:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:16:INFO] Sniff delimiter as ',' [2020-10-21:20:10:16:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:16:INFO] Sniff delimiter as ',' [2020-10-21:20:10:16:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:16:INFO] Model loaded successfully for worker : 39 [2020-10-21:20:10:16:INFO] Sniff delimiter as ',' [2020-10-21:20:10:16:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:16:INFO] Sniff delimiter as ',' [2020-10-21:20:10:16:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:16:INFO] Sniff delimiter as ',' [2020-10-21:20:10:16:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:16:INFO] Sniff delimiter as ',' [2020-10-21:20:10:16:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:18:INFO] Sniff delimiter as ',' [2020-10-21:20:10:18:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:18:INFO] Sniff delimiter as ',' [2020-10-21:20:10:18:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:18:INFO] Sniff delimiter as ',' [2020-10-21:20:10:18:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:18:INFO] Sniff delimiter as ',' [2020-10-21:20:10:18:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:18:INFO] Sniff delimiter as ',' [2020-10-21:20:10:18:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:18:INFO] Sniff delimiter as ',' [2020-10-21:20:10:18:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:19:INFO] Sniff delimiter as ',' [2020-10-21:20:10:19:INFO] Sniff delimiter as ',' [2020-10-21:20:10:19:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:19:INFO] Determined delimiter of CSV input is ',' 2020-10-21T20:10:15.916:[sagemaker logs]: 
MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-10-21:20:10:21:INFO] Sniff delimiter as ',' [2020-10-21:20:10:21:INFO] Sniff delimiter as ',' [2020-10-21:20:10:21:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:21:INFO] Sniff delimiter as ',' [2020-10-21:20:10:21:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:21:INFO] Sniff delimiter as ',' [2020-10-21:20:10:21:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:21:INFO] Sniff delimiter as ',' [2020-10-21:20:10:21:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:21:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:21:INFO] Sniff delimiter as ',' [2020-10-21:20:10:21:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:21:INFO] Sniff delimiter as ',' [2020-10-21:20:10:21:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:21:INFO] Sniff delimiter as ',' [2020-10-21:20:10:21:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:23:INFO] Sniff delimiter as ',' [2020-10-21:20:10:23:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:23:INFO] Sniff delimiter as ',' [2020-10-21:20:10:23:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:23:INFO] Sniff delimiter as ',' [2020-10-21:20:10:23:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:23:INFO] Sniff delimiter as ',' [2020-10-21:20:10:23:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:23:INFO] Sniff delimiter as ',' [2020-10-21:20:10:23:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:23:INFO] Sniff delimiter as ',' [2020-10-21:20:10:23:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:26:INFO] Sniff delimiter as ',' [2020-10-21:20:10:26:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:26:INFO] Sniff delimiter as ',' [2020-10-21:20:10:26:INFO] Sniff delimiter as ',' [2020-10-21:20:10:26:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:26:INFO] Sniff delimiter as ',' [2020-10-21:20:10:26:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:26:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:28:INFO] Sniff delimiter as ',' [2020-10-21:20:10:28:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:28:INFO] Sniff delimiter as ',' [2020-10-21:20:10:28:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:28:INFO] Sniff delimiter as ',' [2020-10-21:20:10:28:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:28:INFO] Sniff delimiter as ',' [2020-10-21:20:10:28:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:28:INFO] Sniff delimiter as ',' [2020-10-21:20:10:28:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:28:INFO] Sniff delimiter as ',' [2020-10-21:20:10:28:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:28:INFO] Sniff delimiter as ',' [2020-10-21:20:10:28:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:28:INFO] Sniff delimiter as ',' [2020-10-21:20:10:28:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:31:INFO] Sniff delimiter as ',' [2020-10-21:20:10:31:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:31:INFO] Sniff delimiter as ',' [2020-10-21:20:10:31:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:31:INFO] Sniff delimiter as ',' [2020-10-21:20:10:31:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:31:INFO] Sniff delimiter as ',' 
[2020-10-21:20:10:31:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:31:INFO] Sniff delimiter as ',' [2020-10-21:20:10:31:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:31:INFO] Sniff delimiter as ',' [2020-10-21:20:10:31:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:31:INFO] Sniff delimiter as ',' [2020-10-21:20:10:31:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:31:INFO] Sniff delimiter as ',' [2020-10-21:20:10:31:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:33:INFO] Sniff delimiter as ',' [2020-10-21:20:10:33:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:33:INFO] Sniff delimiter as ',' [2020-10-21:20:10:33:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:33:INFO] Sniff delimiter as ',' [2020-10-21:20:10:33:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:33:INFO] Sniff delimiter as ',' [2020-10-21:20:10:33:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:33:INFO] Sniff delimiter as ',' [2020-10-21:20:10:33:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:33:INFO] Sniff delimiter as ',' [2020-10-21:20:10:33:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:33:INFO] Sniff delimiter as ',' [2020-10-21:20:10:33:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:10:33:INFO] Sniff delimiter as ',' [2020-10-21:20:10:33:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/370.3 KiB (2.8 MiB/s) with 1 file(s) remaining Completed 370.3 KiB/370.3 KiB (3.9 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-501357048875/xgboost-2020-10-21-20-04-57-840/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. 
Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = None vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv # This is our local data directory. We need to make sure that it exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = None new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. 
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ..............................2020-10-21T20:16:28.572:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2020-10-21 20:16:28 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-10-21 20:16:28 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-10-21 20:16:28 +0000] [1] [INFO] Using worker: gevent [2020-10-21 20:16:28 +0000] [37] [INFO] Booting worker with pid: 37 [2020-10-21 20:16:28 +0000] [38] [INFO] Booting worker with pid: 38 [2020-10-21 20:16:28 +0000] [39] [INFO] Booting worker with pid: 39 [2020-10-21 20:16:28 +0000] [40] [INFO] Booting worker with pid: 40 [2020-10-21:20:16:28:INFO] Model loaded successfully for worker : 37 Arguments: serve [2020-10-21 20:16:28 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-10-21 20:16:28 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-10-21 20:16:28 +0000] [1] [INFO] Using worker: gevent [2020-10-21 20:16:28 +0000] [37] [INFO] Booting worker with pid: 37 [2020-10-21 20:16:28 +0000] [38] [INFO] Booting worker with pid: 38 [2020-10-21 20:16:28 +0000] [39] [INFO] Booting worker with pid: 39 [2020-10-21 20:16:28 +0000] [40] [INFO] Booting worker with pid: 40 [2020-10-21:20:16:28:INFO] Model loaded successfully for worker : 37 [2020-10-21:20:16:28:INFO] Model loaded successfully for worker : 38 [2020-10-21:20:16:28:INFO] Model loaded successfully for worker : 39 [2020-10-21:20:16:28:INFO] Model loaded successfully for worker : 40 [2020-10-21:20:16:28:INFO] Sniff delimiter as ',' [2020-10-21:20:16:28:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:28:INFO] Sniff delimiter as ',' [2020-10-21:20:16:28:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:29:INFO] Sniff delimiter as ',' [2020-10-21:20:16:29:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:29:INFO] Sniff delimiter as ',' [2020-10-21:20:16:29:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:28:INFO] Model loaded successfully for worker : 38 [2020-10-21:20:16:28:INFO] Model loaded successfully for worker : 39 [2020-10-21:20:16:28:INFO] Model loaded successfully for worker : 40 [2020-10-21:20:16:28:INFO] Sniff delimiter as ',' [2020-10-21:20:16:28:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:28:INFO] Sniff delimiter as ',' [2020-10-21:20:16:28:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:29:INFO] Sniff delimiter as ',' [2020-10-21:20:16:29:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:29:INFO] Sniff delimiter as ',' [2020-10-21:20:16:29:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:31:INFO] Sniff delimiter as ',' [2020-10-21:20:16:31:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:31:INFO] Sniff delimiter as ',' [2020-10-21:20:16:31:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:31:INFO] Sniff delimiter as ',' [2020-10-21:20:16:31:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:31:INFO] Sniff delimiter as ',' [2020-10-21:20:16:31:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:31:INFO] Sniff delimiter as ',' [2020-10-21:20:16:31:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:31:INFO] Sniff delimiter as ',' [2020-10-21:20:16:31:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:31:INFO] Sniff delimiter as ',' [2020-10-21:20:16:31:INFO] Determined delimiter of CSV 
input is ',' [2020-10-21:20:16:31:INFO] Sniff delimiter as ',' [2020-10-21:20:16:31:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:33:INFO] Sniff delimiter as ',' [2020-10-21:20:16:33:INFO] Sniff delimiter as ',' [2020-10-21:20:16:33:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:33:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:34:INFO] Sniff delimiter as ',' [2020-10-21:20:16:34:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:34:INFO] Sniff delimiter as ',' [2020-10-21:20:16:34:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:34:INFO] Sniff delimiter as ',' [2020-10-21:20:16:34:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:34:INFO] Sniff delimiter as ',' [2020-10-21:20:16:34:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:34:INFO] Sniff delimiter as ',' [2020-10-21:20:16:34:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:34:INFO] Sniff delimiter as ',' [2020-10-21:20:16:34:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:37:INFO] Sniff delimiter as ',' [2020-10-21:20:16:37:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:37:INFO] Sniff delimiter as ',' [2020-10-21:20:16:37:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:38:INFO] Sniff delimiter as ',' [2020-10-21:20:16:38:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:38:INFO] Sniff delimiter as ',' [2020-10-21:20:16:38:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:39:INFO] Sniff delimiter as ',' [2020-10-21:20:16:39:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:39:INFO] Sniff delimiter as ',' [2020-10-21:20:16:39:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:39:INFO] Sniff delimiter as ',' [2020-10-21:20:16:39:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:39:INFO] Sniff delimiter as ',' [2020-10-21:20:16:39:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:39:INFO] Sniff delimiter as ',' [2020-10-21:20:16:39:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:39:INFO] Sniff delimiter as ',' [2020-10-21:20:16:39:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:41:INFO] Sniff delimiter as ',' [2020-10-21:20:16:41:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:41:INFO] Sniff delimiter as ',' [2020-10-21:20:16:41:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:41:INFO] Sniff delimiter as ',' [2020-10-21:20:16:41:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:41:INFO] Sniff delimiter as ',' [2020-10-21:20:16:41:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:41:INFO] Sniff delimiter as ',' [2020-10-21:20:16:41:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:41:INFO] Sniff delimiter as ',' [2020-10-21:20:16:41:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:42:INFO] Sniff delimiter as ',' [2020-10-21:20:16:42:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:42:INFO] Sniff delimiter as ',' [2020-10-21:20:16:42:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:43:INFO] Sniff delimiter as ',' [2020-10-21:20:16:43:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:44:INFO] Sniff delimiter as ',' [2020-10-21:20:16:44:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:43:INFO] Sniff delimiter as ',' [2020-10-21:20:16:43:INFO] Determined delimiter of CSV input is ',' 
[2020-10-21:20:16:44:INFO] Sniff delimiter as ',' [2020-10-21:20:16:44:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:44:INFO] Sniff delimiter as ',' [2020-10-21:20:16:44:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:44:INFO] Sniff delimiter as ',' [2020-10-21:20:16:44:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:44:INFO] Sniff delimiter as ',' [2020-10-21:20:16:44:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:16:44:INFO] Sniff delimiter as ',' [2020-10-21:20:16:44:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/370.5 KiB (4.4 MiB/s) with 1 file(s) remaining Completed 370.5 KiB/370.5 KiB (6.2 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-501357048875/xgboost-2020-10-21-20-11-40-087/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = None xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. Using already existing model: xgboost-2020-10-21-19-57-43-292 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. 
xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews, so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['idea', 'potenti', 'movi', 'poorli', 'script', 'poorli', 'act', 'poorli', 'shot', 'poorli', 'edit', 'lot', 'product', 'flaw', 'exampl', 'dr', 'lane', 'daughter', 'never', 'age', 'despit', 'pass', 'year', 'wait', 'video', 'expect', 'much', 'banana'], 1) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:507: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None' warnings.warn("The parameter 'token_pattern' will not be used" ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'ghetto', 'reincarn', 'victorian', 'playboy', 'weari', '21st', 'spill'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'dubiou', 'optimist', 'orchestr', 'sophi', 'banana', 'masterson', 'omin'} ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question:** What exactly is going on here? Not only what (if any) words appear with a larger-than-expected frequency but also, what does this mean?
What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send an bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. 
###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. # new_data_location = None # new_val_location = None # new_train_location = None new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. # new_xgb = None new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2020-10-21 20:24:49 Starting - Starting the training job... 2020-10-21 20:24:51 Starting - Launching requested ML instances...... 2020-10-21 20:26:07 Starting - Preparing the instances for training...... 2020-10-21 20:27:06 Downloading - Downloading input data... 2020-10-21 20:27:37 Training - Downloading the training image..Arguments: train [2020-10-21:20:27:58:INFO] Running standalone xgboost training. [2020-10-21:20:27:58:INFO] File size need to be processed in the node: 238.47mb. 
Available memory size in the node: 8497.61mb [2020-10-21:20:27:58:INFO] Determined delimiter of CSV input is ',' [20:27:58] S3DistributionType set as FullyReplicated [20:28:00] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-10-21:20:28:00:INFO] Determined delimiter of CSV input is ',' [20:28:00] S3DistributionType set as FullyReplicated [20:28:01] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, 2020-10-21 20:27:56 Training - Training image download completed. Training in progress.[20:28:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 10 pruned nodes, max_depth=5 [0]#011train-error:0.306533#011validation-error:0.3045 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [20:28:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 6 pruned nodes, max_depth=5 [1]#011train-error:0.3106#011validation-error:0.3081 [20:28:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 8 pruned nodes, max_depth=5 [2]#011train-error:0.2926#011validation-error:0.2889 [20:28:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [3]#011train-error:0.279667#011validation-error:0.2744 [20:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 4 pruned nodes, max_depth=5 [4]#011train-error:0.272667#011validation-error:0.2725 [20:28:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [5]#011train-error:0.263667#011validation-error:0.2642 [20:28:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 2 pruned nodes, max_depth=5 [6]#011train-error:0.252933#011validation-error:0.259 [20:28:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [7]#011train-error:0.252867#011validation-error:0.2581 [20:28:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [8]#011train-error:0.242667#011validation-error:0.248 [20:28:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [9]#011train-error:0.240133#011validation-error:0.246 [20:28:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [10]#011train-error:0.234533#011validation-error:0.2412 [20:28:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 12 pruned nodes, max_depth=5 [11]#011train-error:0.231067#011validation-error:0.2367 [20:28:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [12]#011train-error:0.225533#011validation-error:0.2316 [20:28:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [13]#011train-error:0.221733#011validation-error:0.2295 [20:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 0 pruned nodes, max_depth=5 [14]#011train-error:0.218067#011validation-error:0.2264 [20:28:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [15]#011train-error:0.211933#011validation-error:0.2241 [20:28:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 
pruned nodes, max_depth=5 [16]#011train-error:0.2106#011validation-error:0.2211 [20:28:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [17]#011train-error:0.205667#011validation-error:0.2184 [20:28:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [18]#011train-error:0.2028#011validation-error:0.2176 [20:28:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [19]#011train-error:0.199667#011validation-error:0.2149 [20:28:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [20]#011train-error:0.196333#011validation-error:0.2124 [20:28:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [21]#011train-error:0.194533#011validation-error:0.2101 [20:28:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 14 pruned nodes, max_depth=5 [22]#011train-error:0.192133#011validation-error:0.2075 [20:28:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5 [23]#011train-error:0.190133#011validation-error:0.2065 [20:28:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5 [24]#011train-error:0.188267#011validation-error:0.2055 [20:28:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [25]#011train-error:0.186267#011validation-error:0.2045 [20:28:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [26]#011train-error:0.183533#011validation-error:0.203 [20:28:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5 [27]#011train-error:0.1824#011validation-error:0.2025 [20:28:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [28]#011train-error:0.180267#011validation-error:0.2004 [20:28:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [29]#011train-error:0.177867#011validation-error:0.1987 [20:28:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [30]#011train-error:0.176667#011validation-error:0.1979 [20:28:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [31]#011train-error:0.175667#011validation-error:0.1979 [20:28:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [32]#011train-error:0.173733#011validation-error:0.1972 [20:28:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [33]#011train-error:0.1718#011validation-error:0.1958 [20:28:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [34]#011train-error:0.1714#011validation-error:0.1939 [20:28:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5 [35]#011train-error:0.170933#011validation-error:0.1915 [20:28:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [36]#011train-error:0.1686#011validation-error:0.1916 [20:28:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5 
[37]#011train-error:0.168133#011validation-error:0.1917 [20:28:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [38]#011train-error:0.1646#011validation-error:0.1887 [20:28:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 2 pruned nodes, max_depth=5 [39]#011train-error:0.1632#011validation-error:0.1883 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model # new_xgb_transformer = None new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. 
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output .............................2020-10-21T20:35:17.582:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2020-10-21 20:35:17 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-10-21 20:35:17 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-10-21 20:35:17 +0000] [1] [INFO] Using worker: gevent [2020-10-21 20:35:17 +0000] [36] [INFO] Booting worker with pid: 36 [2020-10-21 20:35:17 +0000] [37] [INFO] Booting worker with pid: 37 [2020-10-21:20:35:17:INFO] Model loaded successfully for worker : 36 [2020-10-21 20:35:17 +0000] [38] [INFO] Booting worker with pid: 38 [2020-10-21:20:35:17:INFO] Model loaded successfully for worker : 37 Arguments: serve [2020-10-21 20:35:17 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-10-21 20:35:17 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-10-21 20:35:17 +0000] [1] [INFO] Using worker: gevent [2020-10-21 20:35:17 +0000] [36] [INFO] Booting worker with pid: 36 [2020-10-21 20:35:17 +0000] [37] [INFO] Booting worker with pid: 37 [2020-10-21:20:35:17:INFO] Model loaded successfully for worker : 36 [2020-10-21 20:35:17 +0000] [38] [INFO] Booting worker with pid: 38 [2020-10-21:20:35:17:INFO] Model loaded successfully for worker : 37 [2020-10-21 20:35:17 +0000] [39] [INFO] Booting worker with pid: 39 [2020-10-21:20:35:17:INFO] Model loaded successfully for worker : 38 [2020-10-21:20:35:17:INFO] Model loaded successfully for worker : 39 [2020-10-21:20:35:17:INFO] Sniff delimiter as ',' [2020-10-21:20:35:17:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:18:INFO] Sniff delimiter as ',' [2020-10-21:20:35:18:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:18:INFO] Sniff delimiter as ',' [2020-10-21:20:35:18:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:18:INFO] Sniff delimiter as ',' [2020-10-21:20:35:18:INFO] Determined delimiter of CSV input is ',' [2020-10-21 20:35:17 +0000] [39] [INFO] Booting worker with pid: 39 [2020-10-21:20:35:17:INFO] Model loaded successfully for worker : 38 [2020-10-21:20:35:17:INFO] Model loaded successfully for worker : 39 [2020-10-21:20:35:17:INFO] Sniff delimiter as ',' [2020-10-21:20:35:17:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:18:INFO] Sniff delimiter as ',' [2020-10-21:20:35:18:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:18:INFO] Sniff delimiter as ',' [2020-10-21:20:35:18:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:18:INFO] Sniff delimiter as ',' [2020-10-21:20:35:18:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:20:INFO] Sniff delimiter as ',' [2020-10-21:20:35:20:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:20:INFO] Sniff delimiter as ',' [2020-10-21:20:35:20:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:20:INFO] Sniff delimiter as ',' [2020-10-21:20:35:20:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:20:INFO] Sniff delimiter as ',' [2020-10-21:20:35:20:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:20:INFO] Sniff delimiter as ',' [2020-10-21:20:35:20:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:20:INFO] Sniff delimiter as ',' [2020-10-21:20:35:20:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:20:INFO] Sniff delimiter as ',' [2020-10-21:20:35:20:INFO] Determined delimiter 
of CSV input is ',' [2020-10-21:20:35:20:INFO] Sniff delimiter as ',' [2020-10-21:20:35:20:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:22:INFO] Sniff delimiter as ',' [2020-10-21:20:35:22:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:23:INFO] Sniff delimiter as ',' [2020-10-21:20:35:23:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:23:INFO] Sniff delimiter as ',' [2020-10-21:20:35:23:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:22:INFO] Sniff delimiter as ',' [2020-10-21:20:35:22:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:23:INFO] Sniff delimiter as ',' [2020-10-21:20:35:23:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:23:INFO] Sniff delimiter as ',' [2020-10-21:20:35:23:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:23:INFO] Sniff delimiter as ',' [2020-10-21:20:35:23:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:23:INFO] Sniff delimiter as ',' [2020-10-21:20:35:23:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:25:INFO] Sniff delimiter as ',' [2020-10-21:20:35:25:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:25:INFO] Sniff delimiter as ',' [2020-10-21:20:35:25:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:25:INFO] Sniff delimiter as ',' [2020-10-21:20:35:25:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:25:INFO] Sniff delimiter as ',' [2020-10-21:20:35:25:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:25:INFO] Sniff delimiter as ',' [2020-10-21:20:35:25:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:25:INFO] Sniff delimiter as ',' [2020-10-21:20:35:25:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:25:INFO] Sniff delimiter as ',' [2020-10-21:20:35:25:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:25:INFO] Sniff delimiter as ',' [2020-10-21:20:35:25:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:27:INFO] Sniff delimiter as ',' [2020-10-21:20:35:27:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:27:INFO] Sniff delimiter as ',' [2020-10-21:20:35:27:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:28:INFO] Sniff delimiter as ',' [2020-10-21:20:35:28:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:28:INFO] Sniff delimiter as ',' [2020-10-21:20:35:28:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:27:INFO] Sniff delimiter as ',' [2020-10-21:20:35:27:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:27:INFO] Sniff delimiter as ',' [2020-10-21:20:35:27:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:28:INFO] Sniff delimiter as ',' [2020-10-21:20:35:28:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:28:INFO] Sniff delimiter as ',' [2020-10-21:20:35:28:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:30:INFO] Sniff delimiter as ',' [2020-10-21:20:35:30:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:30:INFO] Sniff delimiter as ',' [2020-10-21:20:35:30:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:30:INFO] Sniff delimiter as ',' [2020-10-21:20:35:30:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:30:INFO] Sniff delimiter as ',' [2020-10-21:20:35:30:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:30:INFO] Sniff delimiter as ',' [2020-10-21:20:35:30:INFO] Determined delimiter of CSV input 
is ',' [2020-10-21:20:35:30:INFO] Sniff delimiter as ',' [2020-10-21:20:35:30:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:30:INFO] Sniff delimiter as ',' [2020-10-21:20:35:30:INFO] Determined delimiter of CSV input is ',' [2020-10-21:20:35:30:INFO] Sniff delimiter as ',' [2020-10-21:20:35:30:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/366.0 KiB (3.9 MiB/s) with 1 file(s) remaining Completed 366.0 KiB/366.0 KiB (5.5 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-501357048875/xgboost-2020-10-21-20-30-32-847/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. # test_X = None test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. 
###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code
from time import gmtime, strftime

# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# One possible completion, following the Boston Housing low level deployment pattern:
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())

# TODO: Using the SageMaker Client, construct the endpoint configuration.
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
    EndpointConfigName = new_xgb_endpoint_config_name,
    ProductionVariants = [{"InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1,
                           "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name,
                           "VariantName": "XGB-Model"}])
###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# One possible completion: ask the SageMaker client to swap the endpoint's configuration in place.
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint,
                                         EndpointConfigName=new_xgb_endpoint_config_name)
###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output ! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
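###Markdown Before shutting the endpoint down, it can be reassuring to double check that the update actually went through, i.e. that the endpoint is `InService` and now refers to the new endpoint configuration. The cell below is only an optional sketch using the low level client; the `EndpointStatus` and `EndpointConfigName` fields come from the standard `describe_endpoint` response rather than from anything defined in this notebook. ###Code
# Optional sanity check: inspect the endpoint before deleting it.
endpoint_info = session.sagemaker_client.describe_endpoint(EndpointName=xgb_predictor.endpoint)
print(endpoint_info['EndpointStatus'])      # expected to be 'InService' once the update finishes
print(endpoint_info['EndpointConfigName'])  # expected to match new_xgb_endpoint_config_name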
###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. 
Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-06-24 14:23:00-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 9.66MB/s in 11s 2020-06-24 14:23:11 (7.00 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test read_from_cache = False if read_from_cache: train_X, test_X, train_y, test_y = preprocess_data(_, _, _, _) # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output _____no_output_____ ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
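To make the idea concrete, here is a minimal sketch of a bag-of-words encoding applied to two made-up, already-tokenized reviews. The toy reviews and variable names below are invented for illustration only, but the trick of passing identity functions as the preprocessor and tokenizer is the same one used by the extraction function that follows.
###Code
from sklearn.feature_extraction.text import CountVectorizer

# Two tiny, already-stemmed "reviews" (hypothetical examples, not taken from the IMDb data)
toy_reviews = [['great', 'movi', 'great', 'act'],
               ['bad', 'movi']]

# The reviews are already tokenized, so identity functions skip the preprocessing and
# tokenization steps and CountVectorizer only has to count words.
toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
toy_features = toy_vectorizer.fit_transform(toy_reviews).toarray()

print(toy_vectorizer.vocabulary_)  # maps each word to a column index, e.g. 'great' -> 2
print(toy_features)                # one row per review, one count per vocabulary word
###Output
_____no_output_____
###Markdown
The extraction function below does the same thing at full scale, with the vocabulary capped at the 5000 most frequent words of the training reviews and with caching so the features do not have to be recomputed.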
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary read_from_cache = False if read_from_cache: train_X, test_X, vocabulary = extract_BoW_features(_, _) # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])

val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
    os.makedirs(data_dir)

pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)

pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)

# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker

session = sagemaker.Session() # Store the current SageMaker session

# S3 prefix (which folder will we use)
prefix = 'sentiment-update'

test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker.
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role

# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()

# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')

# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
                                    role,      # What is our current IAM Role
                                    train_instance_count=1, # How many compute instances
                                    train_instance_type='ml.m4.xlarge', # What kind of compute instances
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                    sagemaker_session=session)

# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
                        eta=0.2,
                        gamma=4,
                        min_child_weight=6,
                        subsample=0.8,
                        silent=0,
                        objective='binary:logistic',
                        early_stopping_rounds=10,
                        num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')

xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
2020-06-25 18:38:52 Starting - Starting the training job...
2020-06-25 18:38:54 Starting - Launching requested ML instances.........
2020-06-25 18:40:27 Starting - Preparing the instances for training...
2020-06-25 18:41:13 Downloading - Downloading input data...
2020-06-25 18:41:47 Training - Training image download completed. Training in progress..Arguments: train
[2020-06-25:18:41:48:INFO] Running standalone xgboost training.
[2020-06-25:18:41:48:INFO] File size need to be processed in the node: 238.47mb.
Available memory size in the node: 8461.85mb [2020-06-25:18:41:48:INFO] Determined delimiter of CSV input is ',' [18:41:48] S3DistributionType set as FullyReplicated [18:41:50] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-06-25:18:41:50:INFO] Determined delimiter of CSV input is ',' [18:41:50] S3DistributionType set as FullyReplicated [18:41:51] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [18:41:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [0]#011train-error:0.2962#011validation-error:0.3001 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [18:41:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [1]#011train-error:0.291933#011validation-error:0.2974 [18:41:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 0 pruned nodes, max_depth=5 [2]#011train-error:0.278#011validation-error:0.2828 [18:41:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 6 pruned nodes, max_depth=5 [3]#011train-error:0.269267#011validation-error:0.2715 [18:42:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [4]#011train-error:0.2554#011validation-error:0.263 [18:42:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [5]#011train-error:0.252#011validation-error:0.2602 [18:42:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [6]#011train-error:0.242933#011validation-error:0.2536 [18:42:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [7]#011train-error:0.2374#011validation-error:0.2469 [18:42:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [8]#011train-error:0.230667#011validation-error:0.2436 [18:42:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [9]#011train-error:0.226133#011validation-error:0.2397 [18:42:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 2 pruned nodes, max_depth=5 [10]#011train-error:0.220933#011validation-error:0.235 [18:42:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [11]#011train-error:0.216933#011validation-error:0.2311 [18:42:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [12]#011train-error:0.211733#011validation-error:0.2272 [18:42:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [13]#011train-error:0.207667#011validation-error:0.2257 [18:42:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 14 pruned nodes, max_depth=5 [14]#011train-error:0.203133#011validation-error:0.2196 [18:42:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 14 pruned nodes, max_depth=5 [15]#011train-error:0.200133#011validation-error:0.2169 [18:42:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [16]#011train-error:0.1968#011validation-error:0.2131 [18:42:18] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [17]#011train-error:0.192067#011validation-error:0.2088 [18:42:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [18]#011train-error:0.1912#011validation-error:0.2066 [18:42:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [19]#011train-error:0.186933#011validation-error:0.2044 [18:42:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 16 pruned nodes, max_depth=5 [20]#011train-error:0.182867#011validation-error:0.203 [18:42:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [21]#011train-error:0.179067#011validation-error:0.2009 [18:42:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [22]#011train-error:0.175867#011validation-error:0.2 [18:42:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [23]#011train-error:0.1756#011validation-error:0.1996 [18:42:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [24]#011train-error:0.173667#011validation-error:0.1969 [18:42:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [25]#011train-error:0.170933#011validation-error:0.1938 [18:42:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 14 pruned nodes, max_depth=5 [26]#011train-error:0.169067#011validation-error:0.1935 [18:42:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [27]#011train-error:0.168#011validation-error:0.1928 [18:42:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5 [28]#011train-error:0.166067#011validation-error:0.1899 [18:42:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [29]#011train-error:0.165267#011validation-error:0.1896 [18:42:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [30]#011train-error:0.163267#011validation-error:0.1894 [18:42:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 18 pruned nodes, max_depth=5 [31]#011train-error:0.1616#011validation-error:0.1876 [18:42:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [32]#011train-error:0.1592#011validation-error:0.1855 [18:42:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [33]#011train-error:0.158667#011validation-error:0.1852 [18:42:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [34]#011train-error:0.158267#011validation-error:0.184 [18:42:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [35]#011train-error:0.157667#011validation-error:0.1855 [18:42:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [36]#011train-error:0.1564#011validation-error:0.1838 [18:42:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [37]#011train-error:0.154733#011validation-error:0.1844 [18:42:45] src/tree/updater_prune.cc:74: 
tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5
[38]#011train-error:0.154133#011validation-error:0.1821
[18:42:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 12 pruned nodes, max_depth=5
[39]#011train-error:0.152#011validation-error:0.1809
[18:42:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5
[40]#011train-error:0.150333#011validation-error:0.18
[18:42:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5
[41]#011train-error:0.149#011validation-error:0.1786
[18:42:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 16 pruned nodes, max_depth=5
[42]#011train-error:0.1488#011validation-error:0.1767
###Markdown
Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code xgb_transformer.wait() ###Output ....................Arguments: serve [2020-06-25 18:48:09 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-06-25 18:48:09 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-06-25 18:48:09 +0000] [1] [INFO] Using worker: gevent [2020-06-25 18:48:09 +0000] [38] [INFO] Booting worker with pid: 38 [2020-06-25 18:48:09 +0000] [39] [INFO] Booting worker with pid: 39 [2020-06-25 18:48:09 +0000] [40] [INFO] Booting worker with pid: 40 [2020-06-25:18:48:09:INFO] Model loaded successfully for worker : 38 [2020-06-25 18:48:09 +0000] [41] [INFO] Booting worker with pid: 41 [2020-06-25:18:48:09:INFO] Model loaded successfully for worker : 39 [2020-06-25:18:48:09:INFO] Model loaded successfully for worker : 40 [2020-06-25:18:48:09:INFO] Model loaded successfully for worker : 41 2020-06-25T18:48:29.885:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-06-25:18:48:32:INFO] Sniff delimiter as ',' [2020-06-25:18:48:32:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:33:INFO] Sniff delimiter as ',' [2020-06-25:18:48:33:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:32:INFO] Sniff delimiter as ',' [2020-06-25:18:48:32:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:33:INFO] Sniff delimiter as ',' [2020-06-25:18:48:33:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:33:INFO] Sniff delimiter as ',' [2020-06-25:18:48:33:INFO] Sniff delimiter as ',' [2020-06-25:18:48:33:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:33:INFO] Sniff delimiter as ',' [2020-06-25:18:48:33:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:33:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:33:INFO] Sniff delimiter as ',' [2020-06-25:18:48:33:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:35:INFO] Sniff delimiter as ',' [2020-06-25:18:48:35:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:35:INFO] Sniff delimiter as ',' [2020-06-25:18:48:35:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:35:INFO] Sniff delimiter as ',' [2020-06-25:18:48:35:INFO] Sniff delimiter as ',' [2020-06-25:18:48:35:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:35:INFO] Sniff delimiter as ',' [2020-06-25:18:48:35:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:35:INFO] Sniff delimiter as ',' [2020-06-25:18:48:35:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:35:INFO] Sniff delimiter as ',' [2020-06-25:18:48:35:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:35:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:35:INFO] Sniff delimiter as ',' [2020-06-25:18:48:35:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:37:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:37:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:38:INFO] Sniff delimiter as ',' [2020-06-25:18:48:38:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:38:INFO] Sniff delimiter as ',' [2020-06-25:18:48:38:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:38:INFO] Sniff delimiter as ',' [2020-06-25:18:48:38:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:38:INFO] Sniff delimiter as ',' [2020-06-25:18:48:38:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:40:INFO] Sniff delimiter as ',' [2020-06-25:18:48:40:INFO] Determined delimiter of CSV input is ',' 
[2020-06-25:18:48:40:INFO] Sniff delimiter as ',' [2020-06-25:18:48:40:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:40:INFO] Sniff delimiter as ',' [2020-06-25:18:48:40:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:40:INFO] Sniff delimiter as ',' [2020-06-25:18:48:40:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:40:INFO] Sniff delimiter as ',' [2020-06-25:18:48:40:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:40:INFO] Sniff delimiter as ',' [2020-06-25:18:48:40:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:40:INFO] Sniff delimiter as ',' [2020-06-25:18:48:40:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:40:INFO] Sniff delimiter as ',' [2020-06-25:18:48:40:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:42:INFO] Sniff delimiter as ',' [2020-06-25:18:48:42:INFO] Sniff delimiter as ',' [2020-06-25:18:48:42:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:42:INFO] Sniff delimiter as ',' [2020-06-25:18:48:42:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:42:INFO] Sniff delimiter as ',' [2020-06-25:18:48:42:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:43:INFO] Sniff delimiter as ',' [2020-06-25:18:48:43:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:42:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:42:INFO] Sniff delimiter as ',' [2020-06-25:18:48:42:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:42:INFO] Sniff delimiter as ',' [2020-06-25:18:48:42:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:43:INFO] Sniff delimiter as ',' [2020-06-25:18:48:43:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:45:INFO] Sniff delimiter as ',' [2020-06-25:18:48:45:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:45:INFO] Sniff delimiter as ',' [2020-06-25:18:48:45:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:45:INFO] Sniff delimiter as ',' [2020-06-25:18:48:45:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:45:INFO] Sniff delimiter as ',' [2020-06-25:18:48:45:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:45:INFO] Sniff delimiter as ',' [2020-06-25:18:48:45:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:45:INFO] Sniff delimiter as ',' [2020-06-25:18:48:45:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:45:INFO] Sniff delimiter as ',' [2020-06-25:18:48:45:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:45:INFO] Sniff delimiter as ',' [2020-06-25:18:48:45:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:49:INFO] Sniff delimiter as ',' [2020-06-25:18:48:49:INFO] Sniff delimiter as ',' [2020-06-25:18:48:49:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:49:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:50:INFO] Sniff delimiter as ',' [2020-06-25:18:48:50:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:50:INFO] Sniff delimiter as ',' [2020-06-25:18:48:50:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:50:INFO] Sniff delimiter as ',' [2020-06-25:18:48:50:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:50:INFO] Sniff delimiter as ',' [2020-06-25:18:48:50:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:50:INFO] Sniff delimiter as ',' [2020-06-25:18:48:50:INFO] Determined delimiter of CSV input is ',' 
[2020-06-25:18:48:50:INFO] Sniff delimiter as ',' [2020-06-25:18:48:50:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:52:INFO] Sniff delimiter as ',' [2020-06-25:18:48:52:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:52:INFO] Sniff delimiter as ',' [2020-06-25:18:48:52:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:52:INFO] Sniff delimiter as ',' [2020-06-25:18:48:52:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:52:INFO] Sniff delimiter as ',' [2020-06-25:18:48:52:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:52:INFO] Sniff delimiter as ',' [2020-06-25:18:48:52:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:52:INFO] Sniff delimiter as ',' [2020-06-25:18:48:52:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:52:INFO] Sniff delimiter as ',' [2020-06-25:18:48:52:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:52:INFO] Sniff delimiter as ',' [2020-06-25:18:48:52:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:54:INFO] Sniff delimiter as ',' [2020-06-25:18:48:54:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:54:INFO] Sniff delimiter as ',' [2020-06-25:18:48:54:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:54:INFO] Sniff delimiter as ',' [2020-06-25:18:48:54:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:54:INFO] Sniff delimiter as ',' [2020-06-25:18:48:54:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:55:INFO] Sniff delimiter as ',' [2020-06-25:18:48:55:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:55:INFO] Sniff delimiter as ',' [2020-06-25:18:48:55:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:55:INFO] Sniff delimiter as ',' [2020-06-25:18:48:55:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:48:55:INFO] Sniff delimiter as ',' [2020-06-25:18:48:55:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/369.3 KiB (4.2 MiB/s) with 1 file(s) remaining Completed 369.3 KiB/369.3 KiB (6.0 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-central-1-245452871727/xgboost-2020-06-25-18-45-07-290/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. 
Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = None vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # vectorizer.fit(vocabulary) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir # with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = None new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. 
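# transform() only starts the batch job and returns right away; content_type='text/csv' matches
# how new_data.csv was written, and split_type='Line' lets SageMaker split the file on line
# boundaries, one review per record, exactly as was done for the original test set.
# wait() then blocks until the job has finished.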
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ...................Arguments: serve [2020-06-25 18:54:49 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-06-25 18:54:49 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-06-25 18:54:49 +0000] [1] [INFO] Using worker: gevent [2020-06-25 18:54:49 +0000] [38] [INFO] Booting worker with pid: 38 [2020-06-25 18:54:49 +0000] [39] [INFO] Booting worker with pid: 39 [2020-06-25 18:54:49 +0000] [40] [INFO] Booting worker with pid: 40 [2020-06-25:18:54:49:INFO] Model loaded successfully for worker : 38 [2020-06-25:18:54:49:INFO] Model loaded successfully for worker : 39 [2020-06-25 18:54:49 +0000] [41] [INFO] Booting worker with pid: 41 [2020-06-25:18:54:49:INFO] Model loaded successfully for worker : 40 [2020-06-25:18:54:49:INFO] Model loaded successfully for worker : 41 [2020-06-25:18:55:18:INFO] Sniff delimiter as ',' [2020-06-25:18:55:18:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:18:INFO] Sniff delimiter as ',' [2020-06-25:18:55:18:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:18:INFO] Sniff delimiter as ',' [2020-06-25:18:55:18:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:18:INFO] Sniff delimiter as ',' [2020-06-25:18:55:18:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:18:INFO] Sniff delimiter as ',' [2020-06-25:18:55:18:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:18:INFO] Sniff delimiter as ',' [2020-06-25:18:55:18:INFO] Determined delimiter of CSV input is ',' 2020-06-25T18:55:16.027:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-06-25:18:55:19:INFO] Sniff delimiter as ',' [2020-06-25:18:55:19:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:19:INFO] Sniff delimiter as ',' [2020-06-25:18:55:19:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:21:INFO] Sniff delimiter as ',' [2020-06-25:18:55:21:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:21:INFO] Sniff delimiter as ',' [2020-06-25:18:55:21:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:21:INFO] Sniff delimiter as ',' [2020-06-25:18:55:21:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:21:INFO] Sniff delimiter as ',' [2020-06-25:18:55:21:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:21:INFO] Sniff delimiter as ',' [2020-06-25:18:55:21:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:21:INFO] Sniff delimiter as ',' [2020-06-25:18:55:21:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:21:INFO] Sniff delimiter as ',' [2020-06-25:18:55:21:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:21:INFO] Sniff delimiter as ',' [2020-06-25:18:55:21:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:23:INFO] Sniff delimiter as ',' [2020-06-25:18:55:23:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:23:INFO] Sniff delimiter as ',' [2020-06-25:18:55:23:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:23:INFO] Sniff delimiter as ',' [2020-06-25:18:55:23:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:23:INFO] Sniff delimiter as ',' [2020-06-25:18:55:23:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:23:INFO] Sniff delimiter as ',' [2020-06-25:18:55:23:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:23:INFO] Sniff delimiter as ',' 
[2020-06-25:18:55:23:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:23:INFO] Sniff delimiter as ',' [2020-06-25:18:55:23:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:23:INFO] Sniff delimiter as ',' [2020-06-25:18:55:23:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:25:INFO] Sniff delimiter as ',' [2020-06-25:18:55:25:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:25:INFO] Sniff delimiter as ',' [2020-06-25:18:55:25:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:26:INFO] Sniff delimiter as ',' [2020-06-25:18:55:26:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:26:INFO] Sniff delimiter as ',' [2020-06-25:18:55:26:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:26:INFO] Sniff delimiter as ',' [2020-06-25:18:55:26:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:26:INFO] Sniff delimiter as ',' [2020-06-25:18:55:26:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:26:INFO] Sniff delimiter as ',' [2020-06-25:18:55:26:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:26:INFO] Sniff delimiter as ',' [2020-06-25:18:55:26:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:28:INFO] Sniff delimiter as ',' [2020-06-25:18:55:28:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:28:INFO] Sniff delimiter as ',' [2020-06-25:18:55:28:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:30:INFO] Sniff delimiter as ',' [2020-06-25:18:55:30:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:30:INFO] Sniff delimiter as ',' [2020-06-25:18:55:30:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:30:INFO] Sniff delimiter as ',' [2020-06-25:18:55:30:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:30:INFO] Sniff delimiter as ',' [2020-06-25:18:55:30:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:30:INFO] Sniff delimiter as ',' [2020-06-25:18:55:30:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:30:INFO] Sniff delimiter as ',' [2020-06-25:18:55:30:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:31:INFO] Sniff delimiter as ',' [2020-06-25:18:55:31:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:31:INFO] Sniff delimiter as ',' [2020-06-25:18:55:31:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:33:INFO] Sniff delimiter as ',' [2020-06-25:18:55:33:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:33:INFO] Sniff delimiter as ',' [2020-06-25:18:55:33:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:33:INFO] Sniff delimiter as ',' [2020-06-25:18:55:33:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:33:INFO] Sniff delimiter as ',' [2020-06-25:18:55:33:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:33:INFO] Sniff delimiter as ',' [2020-06-25:18:55:33:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:33:INFO] Sniff delimiter as ',' [2020-06-25:18:55:33:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:33:INFO] Sniff delimiter as ',' [2020-06-25:18:55:33:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:33:INFO] Sniff delimiter as ',' [2020-06-25:18:55:33:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:35:INFO] Sniff delimiter as ',' [2020-06-25:18:55:35:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:35:INFO] Sniff delimiter as ',' 
[2020-06-25:18:55:35:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:35:INFO] Sniff delimiter as ',' [2020-06-25:18:55:35:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:35:INFO] Sniff delimiter as ',' [2020-06-25:18:55:35:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:35:INFO] Sniff delimiter as ',' [2020-06-25:18:55:35:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:35:INFO] Sniff delimiter as ',' [2020-06-25:18:55:35:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:36:INFO] Sniff delimiter as ',' [2020-06-25:18:55:36:INFO] Determined delimiter of CSV input is ',' [2020-06-25:18:55:36:INFO] Sniff delimiter as ',' [2020-06-25:18:55:36:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/369.4 KiB (4.0 MiB/s) with 1 file(s) remaining Completed 369.4 KiB/369.4 KiB (5.6 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-central-1-245452871727/xgboost-2020-06-25-18-51-49-376/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code # old score was 0.85424 accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = None xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') ###Output WARNING:sagemaker:Using already existing model: xgboost-2020-06-25-18-38-52-471 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. 
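# csv_serializer converts each bag-of-words vector (a numpy array) into the comma separated
# text/csv payload that the XGBoost endpoint expects, mirroring the format used by the
# batch transform jobs above.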
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
###Code
def get_sample(in_X, in_XV, in_Y):
    for idx, smp in enumerate(in_X):
        res = round(float(xgb_predictor.predict(in_XV[idx])))
        if res != in_Y[idx]:
            yield smp, in_Y[idx]

gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the built-in `next` function on our generator.
###Code
print(next(gn))
###Output
(['pictur', 'scene', 'bunch', 'scriptwrit', 'sit', 'around', 'tabl', 'one', 'say', 'let', 'black', 'woman', 'approach', 'unsuspect', 'member', 'public', 'also', 'black', 'street', 'ask', 'black', 'walk', 'away', 'writer', 'fall', 'laugh', 'hyster', 'one', 'suggest', 'repeat', 'everi', 'episod', 'laughter', 'think', 'premis', 'funni', 'show', 'contain', 'mani', 'type', 'situat', 'enjoy', 'show', 'rest', 'use', 'zapper', 'find', 'someth', 'entertain', 'like', 'watch', 'paint', 'dri', 'written', 'glow', 'report', 'show', 'either', 'get', 'forc', 'watch', 'televis', 'comedi', 'realli', 'funni', 'anoth', 'exampl', 'humor', 'show', 'girl', 'tri', 'get', 'pay', 'supermarket', 'checkout', 'tri', 'hypnotis', 'cashier', 'margin', 'funni', 'first', 'time', 'repeat', 'differ', 'show', 'differ', 'cashier', 'could', 'give', 'exampl', 'might', 'treat', 'spoiler', 'divulg', 'comedi', 'funni'], 0)
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
                                 preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
{'victorian', 'playboy', 'reincarn', 'weari', '21st', 'ghetto', 'spill'}
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
{'banana', 'orchestr', 'sophi', 'masterson', 'optimist', 'omin', 'dubiou'}
###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data.
###Code
cv = CountVectorizer(vocabulary = list(new_vocabulary - original_vocabulary),
                     preprocessor=lambda x: x, tokenizer=lambda x: x)
new_words_XV = cv.transform(new_X).toarray()
print(cv.get_feature_names())
print(new_words_XV.sum(axis=0))

res = pd.DataFrame(new_Y, columns=['result'])
df = pd.DataFrame(new_words_XV, columns=cv.get_feature_names())
df = pd.concat([res, df], axis=1)

print(df[df['banana'] > 0].groupby('result').sum())
print(df.groupby('result').sum())

new_words_XV = None
df_new_XV = None
df = None
###Output
['banana', 'orchestr', 'sophi', 'masterson', 'optimist', 'omin', 'dubiou']
[5090 62 62 62 62 62 62]
        banana  orchestr  sophi  masterson  optimist  omin  dubiou
result
0         2578        11     15         10         8     6       4
1         2512         6      3          2         3     5      12
        banana  orchestr  sophi  masterson  optimist  omin  dubiou
result
0         2578        29     34         15        27    30      29
1         2512        33     28         47        35    32      33
###Markdown
There is something odd about the word 'banana': it appears 5090 times in the new reviews while every other newly added vocabulary word appears only about 62 times, and its occurrences are split almost evenly between negative (2578) and positive (2512) reviews. So it seems that a word which was too rare to even make the original vocabulary has suddenly become very common in the incoming reviews, without carrying any obvious sentiment signal of its own. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd

# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = None new_val_location = None new_train_location = None new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. # And then set the algorithm specific parameters. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. 
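# sagemaker.s3_input simply bundles an S3 URI with a content type; passing content_type='csv'
# tells the built-in XGBoost container to read each channel as header-less CSV with the label
# in the first column, just like the original training job.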
s3_new_input_train = None s3_new_input_validation = None s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2020-06-25 19:44:07 Starting - Starting the training job... 2020-06-25 19:44:09 Starting - Launching requested ML instances...... 2020-06-25 19:45:16 Starting - Preparing the instances for training...... 2020-06-25 19:46:30 Downloading - Downloading input data 2020-06-25 19:46:30 Training - Downloading the training image.. 2020-06-25 19:46:50 Training - Training image download completed. Training in progress.Arguments: train [2020-06-25:19:46:51:INFO] Running standalone xgboost training. [2020-06-25:19:46:51:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8456.28mb [2020-06-25:19:46:51:INFO] Determined delimiter of CSV input is ',' [19:46:51] S3DistributionType set as FullyReplicated [19:46:53] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-06-25:19:46:53:INFO] Determined delimiter of CSV input is ',' [19:46:53] S3DistributionType set as FullyReplicated [19:46:54] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [19:46:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [0]#011train-error:0.301#011validation-error:0.3017 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. 
[19:46:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [1]#011train-error:0.2978#011validation-error:0.2978 [19:47:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [2]#011train-error:0.277067#011validation-error:0.2783 [19:47:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [3]#011train-error:0.275733#011validation-error:0.2797 [19:47:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [4]#011train-error:0.266133#011validation-error:0.2701 [19:47:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.264733#011validation-error:0.2679 [19:47:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 12 pruned nodes, max_depth=5 [6]#011train-error:0.257933#011validation-error:0.2638 [19:47:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [7]#011train-error:0.250067#011validation-error:0.2556 [19:47:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [8]#011train-error:0.236333#011validation-error:0.2468 [19:47:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 12 pruned nodes, max_depth=5 [9]#011train-error:0.231933#011validation-error:0.2432 [19:47:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [10]#011train-error:0.228533#011validation-error:0.2382 [19:47:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [11]#011train-error:0.224467#011validation-error:0.2348 [19:47:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 12 pruned nodes, max_depth=5 [12]#011train-error:0.218733#011validation-error:0.2289 [19:47:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 2 pruned nodes, max_depth=5 [13]#011train-error:0.215133#011validation-error:0.2264 [19:47:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [14]#011train-error:0.2116#011validation-error:0.2227 [19:47:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [15]#011train-error:0.211267#011validation-error:0.2221 [19:47:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [16]#011train-error:0.207667#011validation-error:0.2181 [19:47:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [17]#011train-error:0.204933#011validation-error:0.2161 [19:47:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [18]#011train-error:0.2006#011validation-error:0.2135 [19:47:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [19]#011train-error:0.198267#011validation-error:0.2111 [19:47:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [20]#011train-error:0.196533#011validation-error:0.2114 [19:47:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 10 pruned nodes, max_depth=5 [21]#011train-error:0.192667#011validation-error:0.2086 [19:47:26] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [22]#011train-error:0.190533#011validation-error:0.2071 [19:47:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [23]#011train-error:0.188267#011validation-error:0.205 [19:47:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [24]#011train-error:0.1864#011validation-error:0.2046 [19:47:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [25]#011train-error:0.185533#011validation-error:0.2024 [19:47:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [26]#011train-error:0.182133#011validation-error:0.2007 [19:47:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [27]#011train-error:0.1808#011validation-error:0.2001 [19:47:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [28]#011train-error:0.180133#011validation-error:0.1977 [19:47:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [29]#011train-error:0.1776#011validation-error:0.1969 [19:47:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5 [30]#011train-error:0.1748#011validation-error:0.1957 [19:47:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [31]#011train-error:0.174467#011validation-error:0.1969 [19:47:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [32]#011train-error:0.171733#011validation-error:0.197 [19:47:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [33]#011train-error:0.170533#011validation-error:0.1961 [19:47:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 [34]#011train-error:0.169533#011validation-error:0.1946 [19:47:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [35]#011train-error:0.167933#011validation-error:0.1946 [19:47:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [36]#011train-error:0.167133#011validation-error:0.193 [19:47:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [37]#011train-error:0.1652#011validation-error:0.1923 [19:47:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [38]#011train-error:0.1634#011validation-error:0.1919 [19:47:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [39]#011train-error:0.163#011validation-error:0.1915 [19:47:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [40]#011train-error:0.161533#011validation-error:0.1911 [19:47:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 18 pruned nodes, max_depth=5 [41]#011train-error:0.1606#011validation-error:0.1883 [19:47:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 16 pruned nodes, max_depth=5 [42]#011train-error:0.159467#011validation-error:0.1864 [19:47:52] src/tree/updater_prune.cc:74: 
tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [43]#011train-error:0.158533#011validation-error:0.1857 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = None new_xgb_transformer = new_xgb.transformer(instance_count=1, instance_type='ml.m4.xlarge') ###Output WARNING:sagemaker:Using already existing model: xgboost-2020-06-25-19-44-06-973 ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ...................Arguments: serve [2020-06-25 19:55:00 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-06-25 19:55:00 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-06-25 19:55:00 +0000] [1] [INFO] Using worker: gevent [2020-06-25 19:55:00 +0000] [37] [INFO] Booting worker with pid: 37 [2020-06-25 19:55:00 +0000] [38] [INFO] Booting worker with pid: 38 [2020-06-25 19:55:00 +0000] [39] [INFO] Booting worker with pid: 39 [2020-06-25:19:55:00:INFO] Model loaded successfully for worker : 37 [2020-06-25 19:55:00 +0000] [40] [INFO] Booting worker with pid: 40 [2020-06-25:19:55:00:INFO] Model loaded successfully for worker : 38 Arguments: serve [2020-06-25 19:55:00 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-06-25 19:55:00 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-06-25 19:55:00 +0000] [1] [INFO] Using worker: gevent [2020-06-25 19:55:00 +0000] [37] [INFO] Booting worker with pid: 37 [2020-06-25 19:55:00 +0000] [38] [INFO] Booting worker with pid: 38 [2020-06-25 19:55:00 +0000] [39] [INFO] Booting worker with pid: 39 [2020-06-25:19:55:00:INFO] Model loaded successfully for worker : 37 [2020-06-25 19:55:00 +0000] [40] [INFO] Booting worker with pid: 40 [2020-06-25:19:55:00:INFO] Model loaded successfully for worker : 38 [2020-06-25:19:55:00:INFO] Model loaded successfully for worker : 39 [2020-06-25:19:55:00:INFO] Model loaded successfully for worker : 40 [2020-06-25:19:55:00:INFO] Model loaded successfully for worker : 39 [2020-06-25:19:55:00:INFO] Model loaded successfully for worker : 40 [2020-06-25:19:55:05:INFO] Sniff delimiter as ',' [2020-06-25:19:55:05:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:05:INFO] Sniff delimiter as ',' [2020-06-25:19:55:05:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:05:INFO] Sniff delimiter as ',' [2020-06-25:19:55:05:INFO] Determined delimiter of CSV input 
is ',' [2020-06-25:19:55:05:INFO] Sniff delimiter as ',' [2020-06-25:19:55:05:INFO] Determined delimiter of CSV input is ',' 2020-06-25T19:55:02.843:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-06-25:19:55:06:INFO] Sniff delimiter as ',' [2020-06-25:19:55:06:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:06:INFO] Sniff delimiter as ',' [2020-06-25:19:55:06:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:07:INFO] Sniff delimiter as ',' [2020-06-25:19:55:07:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:07:INFO] Sniff delimiter as ',' [2020-06-25:19:55:07:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:07:INFO] Sniff delimiter as ',' [2020-06-25:19:55:07:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:07:INFO] Sniff delimiter as ',' [2020-06-25:19:55:07:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:08:INFO] Sniff delimiter as ',' [2020-06-25:19:55:08:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:08:INFO] Sniff delimiter as ',' [2020-06-25:19:55:08:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:08:INFO] Sniff delimiter as ',' [2020-06-25:19:55:08:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:08:INFO] Sniff delimiter as ',' [2020-06-25:19:55:08:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:10:INFO] Sniff delimiter as ',' [2020-06-25:19:55:10:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:10:INFO] Sniff delimiter as ',' [2020-06-25:19:55:10:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:10:INFO] Sniff delimiter as ',' [2020-06-25:19:55:10:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:10:INFO] Sniff delimiter as ',' [2020-06-25:19:55:10:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:10:INFO] Sniff delimiter as ',' [2020-06-25:19:55:10:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:10:INFO] Sniff delimiter as ',' [2020-06-25:19:55:10:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:11:INFO] Sniff delimiter as ',' [2020-06-25:19:55:11:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:11:INFO] Sniff delimiter as ',' [2020-06-25:19:55:11:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:12:INFO] Sniff delimiter as ',' [2020-06-25:19:55:12:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:12:INFO] Sniff delimiter as ',' [2020-06-25:19:55:12:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:12:INFO] Sniff delimiter as ',' [2020-06-25:19:55:12:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:12:INFO] Sniff delimiter as ',' [2020-06-25:19:55:12:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:12:INFO] Sniff delimiter as ',' [2020-06-25:19:55:12:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:12:INFO] Sniff delimiter as ',' [2020-06-25:19:55:12:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:14:INFO] Sniff delimiter as ',' [2020-06-25:19:55:14:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:14:INFO] Sniff delimiter as ',' [2020-06-25:19:55:14:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:14:INFO] Sniff delimiter as ',' [2020-06-25:19:55:14:INFO] Sniff delimiter as ',' [2020-06-25:19:55:14:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:14:INFO] Sniff delimiter as ',' [2020-06-25:19:55:14:INFO] 
Determined delimiter of CSV input is ',' [2020-06-25:19:55:14:INFO] Sniff delimiter as ',' [2020-06-25:19:55:14:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:14:INFO] Sniff delimiter as ',' [2020-06-25:19:55:14:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:14:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:14:INFO] Sniff delimiter as ',' [2020-06-25:19:55:14:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:16:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:16:INFO] Sniff delimiter as ',' [2020-06-25:19:55:16:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:16:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:16:INFO] Sniff delimiter as ',' [2020-06-25:19:55:16:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:17:INFO] Sniff delimiter as ',' [2020-06-25:19:55:17:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:17:INFO] Sniff delimiter as ',' [2020-06-25:19:55:17:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:19:INFO] Sniff delimiter as ',' [2020-06-25:19:55:19:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:19:INFO] Sniff delimiter as ',' [2020-06-25:19:55:19:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:19:INFO] Sniff delimiter as ',' [2020-06-25:19:55:19:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:19:INFO] Sniff delimiter as ',' [2020-06-25:19:55:19:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:19:INFO] Sniff delimiter as ',' [2020-06-25:19:55:19:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:19:INFO] Sniff delimiter as ',' [2020-06-25:19:55:19:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:19:INFO] Sniff delimiter as ',' [2020-06-25:19:55:19:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:19:INFO] Sniff delimiter as ',' [2020-06-25:19:55:19:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:21:INFO] Sniff delimiter as ',' [2020-06-25:19:55:21:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:21:INFO] Sniff delimiter as ',' [2020-06-25:19:55:21:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:21:INFO] Sniff delimiter as ',' [2020-06-25:19:55:21:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:22:INFO] Sniff delimiter as ',' [2020-06-25:19:55:22:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:21:INFO] Sniff delimiter as ',' [2020-06-25:19:55:21:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:21:INFO] Sniff delimiter as ',' [2020-06-25:19:55:21:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:21:INFO] Sniff delimiter as ',' [2020-06-25:19:55:21:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:22:INFO] Sniff delimiter as ',' [2020-06-25:19:55:22:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:24:INFO] Sniff delimiter as ',' [2020-06-25:19:55:24:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:24:INFO] Sniff delimiter as ',' [2020-06-25:19:55:24:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:24:INFO] Sniff delimiter as ',' [2020-06-25:19:55:24:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:24:INFO] Sniff delimiter as ',' [2020-06-25:19:55:24:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:24:INFO] Sniff delimiter as ',' [2020-06-25:19:55:24:INFO] Determined delimiter of CSV input is ',' 
[2020-06-25:19:55:24:INFO] Sniff delimiter as ',' [2020-06-25:19:55:24:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:24:INFO] Sniff delimiter as ',' [2020-06-25:19:55:24:INFO] Determined delimiter of CSV input is ',' [2020-06-25:19:55:24:INFO] Sniff delimiter as ',' [2020-06-25:19:55:24:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/366.4 KiB (4.0 MiB/s) with 1 file(s) remaining Completed 366.4 KiB/366.4 KiB (5.6 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-central-1-245452871727/xgboost-2020-06-25-19-51-40-099/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. 
This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = None new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = None new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output -------------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. 
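Before deleting it, it can be worth confirming that the endpoint really is serving the new configuration. The cell below is a minimal sketch of such a check, assuming the `session` and `xgb_predictor` objects created earlier in this notebook; it only uses the standard `describe_endpoint` call of the low level client. ###Code
# Optional sanity check before shutting the endpoint down (a sketch, assuming the
# `session` and `xgb_predictor` objects defined earlier in this notebook).
endpoint_info = session.sagemaker_client.describe_endpoint(EndpointName=xgb_predictor.endpoint)

print(endpoint_info['EndpointStatus'])      # expected to read 'InService' once the update has finished
print(endpoint_info['EndpointConfigName'])  # expected to match new_xgb_endpoint_config_name
###Output _____no_output_____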
###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. 
###Code # Make sure that we use SageMaker 1.x !pip install sagemaker==1.72.0 ###Output Collecting sagemaker==1.72.0 Using cached sagemaker-1.72.0-py2.py3-none-any.whl Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.4.1) Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.8) Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.5) Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.14.0) Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5) Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.4.0) Requirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.16.63) Collecting smdebug-rulesconfig==0.1.4 Using cached smdebug_rulesconfig-0.1.4-py2.py3-none-any.whl (10 kB) Requirement already satisfied: botocore<1.20.0,>=1.19.63 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.19.63) Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.3.4) Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0) Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.63->boto3>=1.14.12->sagemaker==1.72.0) (1.26.2) Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.63->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1) Requirement already satisfied: typing-extensions>=3.6.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.7.4.3) Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.4.0) Requirement already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.7) Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0) Installing collected packages: smdebug-rulesconfig, sagemaker Attempting uninstall: smdebug-rulesconfig Found existing installation: smdebug-rulesconfig 1.0.1 Uninstalling smdebug-rulesconfig-1.0.1: Successfully uninstalled smdebug-rulesconfig-1.0.1 Attempting uninstall: sagemaker Found existing installation: sagemaker 2.24.1 Uninstalling sagemaker-2.24.1: Successfully uninstalled sagemaker-2.24.1 Successfully installed sagemaker-1.72.0 smdebug-rulesconfig-0.1.4 WARNING: You are using pip version 20.3.3; however, version 21.0.1 is available. 
You should consider upgrading via the '/home/ec2-user/anaconda3/envs/pytorch_p36/bin/python -m pip install --upgrade pip' command. ###Markdown Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2021-02-11 03:13:24-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 27.0MB/s in 3.0s 2021-02-11 03:13:27 (27.0 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Wrote preprocessed data to cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
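Before looking at the full (cached) implementation below, the following cell is a minimal, self-contained sketch of what a bag-of-words encoding does, using two made-up, already-tokenized "reviews"; the toy reviews and the tiny vocabulary they produce are purely illustrative and are not part of the actual dataset. ###Code
from sklearn.feature_extraction.text import CountVectorizer

# Two hypothetical, already-tokenized reviews (illustrative only).
toy_reviews = [['great', 'movi', 'great', 'cast'],
               ['bad', 'movi', 'bad', 'plot']]

# The reviews are already tokenized, so we skip the default preprocessing and
# tokenization steps, just as the full implementation below does.
toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
toy_features = toy_vectorizer.fit_transform(toy_reviews).toarray()

print(toy_vectorizer.vocabulary_)  # word -> column index mapping
print(toy_features)                # one row of word counts per review
###Output _____no_output_____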
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2021-02-11 03:43:36 Starting - Starting the training job... 2021-02-11 03:43:39 Starting - Launching requested ML instances......... 2021-02-11 03:45:11 Starting - Preparing the instances for training...... 2021-02-11 03:46:24 Downloading - Downloading input data 2021-02-11 03:46:24 Training - Downloading the training image..Arguments: train [2021-02-11:03:46:46:INFO] Running standalone xgboost training. [2021-02-11:03:46:46:INFO] File size need to be processed in the node: 238.47mb. 
Available memory size in the node: 8419.09mb [2021-02-11:03:46:46:INFO] Determined delimiter of CSV input is ',' [03:46:46] S3DistributionType set as FullyReplicated [03:46:48] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2021-02-11:03:46:48:INFO] Determined delimiter of CSV input is ',' [03:46:48] S3DistributionType set as FullyReplicated [03:46:49] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [03:46:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [0]#011train-error:0.298533#011validation-error:0.2937 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [03:46:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 2021-02-11 03:46:46 Training - Training image download completed. Training in progress.[1]#011train-error:0.2784#011validation-error:0.2754 [03:46:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [2]#011train-error:0.274467#011validation-error:0.2691 [03:46:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [3]#011train-error:0.272333#011validation-error:0.2675 [03:46:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [4]#011train-error:0.258067#011validation-error:0.2575 [03:47:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.253#011validation-error:0.2541 [03:47:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 2 pruned nodes, max_depth=5 [6]#011train-error:0.240467#011validation-error:0.2445 [03:47:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [7]#011train-error:0.2396#011validation-error:0.2424 [03:47:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [8]#011train-error:0.231267#011validation-error:0.2362 [03:47:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [9]#011train-error:0.2256#011validation-error:0.2305 [03:47:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 2 pruned nodes, max_depth=5 [10]#011train-error:0.219867#011validation-error:0.2255 [03:47:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [11]#011train-error:0.214867#011validation-error:0.2232 [03:47:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [12]#011train-error:0.210867#011validation-error:0.2207 [03:47:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [13]#011train-error:0.205867#011validation-error:0.216 [03:47:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [14]#011train-error:0.2008#011validation-error:0.2126 [03:47:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5 [15]#011train-error:0.196533#011validation-error:0.2107 [03:47:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned 
nodes, max_depth=5 [16]#011train-error:0.193#011validation-error:0.2078 [03:47:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [17]#011train-error:0.1928#011validation-error:0.2059 [03:47:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [18]#011train-error:0.1892#011validation-error:0.2034 [03:47:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [19]#011train-error:0.186533#011validation-error:0.2006 [03:47:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 14 pruned nodes, max_depth=5 [20]#011train-error:0.183933#011validation-error:0.1985 [03:47:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [21]#011train-error:0.181#011validation-error:0.1975 [03:47:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [22]#011train-error:0.178133#011validation-error:0.1963 [03:47:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [23]#011train-error:0.175933#011validation-error:0.1928 [03:47:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [24]#011train-error:0.1736#011validation-error:0.1902 [03:47:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 14 pruned nodes, max_depth=5 [25]#011train-error:0.1714#011validation-error:0.1913 [03:47:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [26]#011train-error:0.170333#011validation-error:0.1882 [03:47:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [27]#011train-error:0.168333#011validation-error:0.1881 [03:47:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [28]#011train-error:0.167667#011validation-error:0.1874 [03:47:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5 [29]#011train-error:0.164267#011validation-error:0.1883 [03:47:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [30]#011train-error:0.163667#011validation-error:0.1871 [03:47:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [31]#011train-error:0.1622#011validation-error:0.1857 [03:47:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5 [32]#011train-error:0.1616#011validation-error:0.1844 [03:47:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [33]#011train-error:0.1594#011validation-error:0.1839 [03:47:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [34]#011train-error:0.1578#011validation-error:0.1827 [03:47:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 16 pruned nodes, max_depth=5 [35]#011train-error:0.155533#011validation-error:0.1815 [03:47:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [36]#011train-error:0.154267#011validation-error:0.179 [03:47:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 
[37]#011train-error:0.153467#011validation-error:0.1768 [03:47:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [38]#011train-error:0.152267#011validation-error:0.1765 [03:47:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [39]#011train-error:0.150467#011validation-error:0.1754 ###Markdown Testing the model
Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real-time. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference is also useful to us because it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
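If you would rather poll for the job's status yourself instead of blocking, something along the following lines should also work. This is only a sketch: `'your-transform-job-name'` is a placeholder for the transform job name that you can look up in the SageMaker console.

```python
# Non-blocking status check (illustrative sketch, not part of the original walkthrough).
import boto3

sm = boto3.client('sagemaker')
response = sm.describe_transform_job(TransformJobName='your-transform-job-name')
print(response['TransformJobStatus'])  # e.g. 'InProgress', 'Completed' or 'Failed'
```

For this notebook, though, simply blocking with `wait()` below is the easiest option.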
###Code xgb_transformer.wait() ###Output .................................2021-02-11T03:56:28.200:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve Arguments: serve [2021-02-11 03:56:28 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-02-11 03:56:28 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-02-11 03:56:28 +0000] [1] [INFO] Using worker: gevent [2021-02-11 03:56:28 +0000] [36] [INFO] Booting worker with pid: 36 [2021-02-11 03:56:28 +0000] [37] [INFO] Booting worker with pid: 37 [2021-02-11 03:56:28 +0000] [38] [INFO] Booting worker with pid: 38 [2021-02-11:03:56:28:INFO] Model loaded successfully for worker : 36 [2021-02-11 03:56:28 +0000] [39] [INFO] Booting worker with pid: 39 [2021-02-11:03:56:28:INFO] Model loaded successfully for worker : 37 [2021-02-11:03:56:28:INFO] Model loaded successfully for worker : 38 [2021-02-11:03:56:28:INFO] Model loaded successfully for worker : 39 [2021-02-11:03:56:28:INFO] Sniff delimiter as ',' [2021-02-11:03:56:28:INFO] Determined delimiter of CSV input is ',' [2021-02-11 03:56:28 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-02-11 03:56:28 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-02-11 03:56:28 +0000] [1] [INFO] Using worker: gevent [2021-02-11 03:56:28 +0000] [36] [INFO] Booting worker with pid: 36 [2021-02-11 03:56:28 +0000] [37] [INFO] Booting worker with pid: 37 [2021-02-11 03:56:28 +0000] [38] [INFO] Booting worker with pid: 38 [2021-02-11:03:56:28:INFO] Model loaded successfully for worker : 36 [2021-02-11 03:56:28 +0000] [39] [INFO] Booting worker with pid: 39 [2021-02-11:03:56:28:INFO] Model loaded successfully for worker : 37 [2021-02-11:03:56:28:INFO] Model loaded successfully for worker : 38 [2021-02-11:03:56:28:INFO] Model loaded successfully for worker : 39 [2021-02-11:03:56:28:INFO] Sniff delimiter as ',' [2021-02-11:03:56:28:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:28:INFO] Sniff delimiter as ',' [2021-02-11:03:56:28:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:28:INFO] Sniff delimiter as ',' [2021-02-11:03:56:28:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:28:INFO] Sniff delimiter as ',' [2021-02-11:03:56:28:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:28:INFO] Sniff delimiter as ',' [2021-02-11:03:56:28:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:28:INFO] Sniff delimiter as ',' [2021-02-11:03:56:28:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:28:INFO] Sniff delimiter as ',' [2021-02-11:03:56:28:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:31:INFO] Sniff delimiter as ',' [2021-02-11:03:56:31:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:31:INFO] Sniff delimiter as ',' [2021-02-11:03:56:31:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:31:INFO] Sniff delimiter as ',' [2021-02-11:03:56:31:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:31:INFO] Sniff delimiter as ',' [2021-02-11:03:56:31:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:31:INFO] Sniff delimiter as ',' [2021-02-11:03:56:31:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:31:INFO] Sniff delimiter as ',' [2021-02-11:03:56:31:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:32:INFO] Sniff delimiter as ',' [2021-02-11:03:56:32:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:32:INFO] Sniff delimiter as ',' 
[2021-02-11:03:56:32:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:32:INFO] Sniff delimiter as ',' [2021-02-11:03:56:32:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:32:INFO] Sniff delimiter as ',' [2021-02-11:03:56:32:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:34:INFO] Sniff delimiter as ',' [2021-02-11:03:56:34:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:34:INFO] Sniff delimiter as ',' [2021-02-11:03:56:34:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:34:INFO] Sniff delimiter as ',' [2021-02-11:03:56:34:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:34:INFO] Sniff delimiter as ',' [2021-02-11:03:56:34:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:34:INFO] Sniff delimiter as ',' [2021-02-11:03:56:34:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:34:INFO] Sniff delimiter as ',' [2021-02-11:03:56:34:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:35:INFO] Sniff delimiter as ',' [2021-02-11:03:56:35:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:35:INFO] Sniff delimiter as ',' [2021-02-11:03:56:35:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:36:INFO] Sniff delimiter as ',' [2021-02-11:03:56:36:INFO] Sniff delimiter as ',' [2021-02-11:03:56:36:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:36:INFO] Sniff delimiter as ',' [2021-02-11:03:56:36:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:36:INFO] Sniff delimiter as ',' [2021-02-11:03:56:36:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:36:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:36:INFO] Sniff delimiter as ',' [2021-02-11:03:56:36:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:36:INFO] Sniff delimiter as ',' [2021-02-11:03:56:36:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:37:INFO] Sniff delimiter as ',' [2021-02-11:03:56:37:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:37:INFO] Sniff delimiter as ',' [2021-02-11:03:56:37:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:38:INFO] Sniff delimiter as ',' [2021-02-11:03:56:38:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:39:INFO] Sniff delimiter as ',' [2021-02-11:03:56:39:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:38:INFO] Sniff delimiter as ',' [2021-02-11:03:56:38:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:39:INFO] Sniff delimiter as ',' [2021-02-11:03:56:39:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:40:INFO] Sniff delimiter as ',' [2021-02-11:03:56:40:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:40:INFO] Sniff delimiter as ',' [2021-02-11:03:56:40:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:41:INFO] Sniff delimiter as ',' [2021-02-11:03:56:41:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:41:INFO] Sniff delimiter as ',' [2021-02-11:03:56:41:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:42:INFO] Sniff delimiter as ',' [2021-02-11:03:56:42:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:42:INFO] Sniff delimiter as ',' [2021-02-11:03:56:42:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:43:INFO] Sniff delimiter as ',' [2021-02-11:03:56:43:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:43:INFO] Sniff delimiter as ',' 
[2021-02-11:03:56:43:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:43:INFO] Sniff delimiter as ',' [2021-02-11:03:56:43:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:43:INFO] Sniff delimiter as ',' [2021-02-11:03:56:43:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:44:INFO] Sniff delimiter as ',' [2021-02-11:03:56:44:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:44:INFO] Sniff delimiter as ',' [2021-02-11:03:56:44:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:44:INFO] Sniff delimiter as ',' [2021-02-11:03:56:44:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:44:INFO] Sniff delimiter as ',' [2021-02-11:03:56:44:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:45:INFO] Sniff delimiter as ',' [2021-02-11:03:56:45:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:45:INFO] Sniff delimiter as ',' [2021-02-11:03:56:45:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:45:INFO] Sniff delimiter as ',' [2021-02-11:03:56:45:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:45:INFO] Sniff delimiter as ',' [2021-02-11:03:56:45:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:46:INFO] Sniff delimiter as ',' [2021-02-11:03:56:46:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:46:INFO] Sniff delimiter as ',' [2021-02-11:03:56:46:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:46:INFO] Sniff delimiter as ',' [2021-02-11:03:56:46:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:46:INFO] Sniff delimiter as ',' [2021-02-11:03:56:46:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:47:INFO] Sniff delimiter as ',' [2021-02-11:03:56:47:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:47:INFO] Sniff delimiter as ',' [2021-02-11:03:56:47:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:47:INFO] Sniff delimiter as ',' [2021-02-11:03:56:47:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:47:INFO] Sniff delimiter as ',' [2021-02-11:03:56:47:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:49:INFO] Sniff delimiter as ',' [2021-02-11:03:56:49:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:49:INFO] Sniff delimiter as ',' [2021-02-11:03:56:49:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:49:INFO] Sniff delimiter as ',' [2021-02-11:03:56:49:INFO] Determined delimiter of CSV input is ',' [2021-02-11:03:56:49:INFO] Sniff delimiter as ',' [2021-02-11:03:56:49:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/369.3 KiB (1.7 MiB/s) with 1 file(s) remaining Completed 369.3 KiB/369.3 KiB (2.5 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-1-522242990749/xgboost-2021-02-11-03-51-09-618/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. 
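As an optional aside (added for illustration, not part of the original walkthrough), once the next cell has produced the rounded `predictions` you can look beyond plain accuracy. A confusion matrix, for example, shows which direction the mistakes go.

```python
# Optional sketch: run this after `predictions` has been created by the cell below.
from sklearn.metrics import confusion_matrix

# Rows correspond to the true labels, columns to the predicted labels (0 = negative, 1 = positive).
print(confusion_matrix(test_y, predictions))
```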
###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. 
###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ................................2021-02-11T04:04:45.286:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2021-02-11 04:04:45 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-02-11 04:04:45 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-02-11 04:04:45 +0000] [1] [INFO] Using worker: gevent [2021-02-11 04:04:45 +0000] [36] [INFO] Booting worker with pid: 36 [2021-02-11 04:04:45 +0000] [37] [INFO] Booting worker with pid: 37 [2021-02-11:04:04:45:INFO] Model loaded successfully for worker : 36 [2021-02-11 04:04:45 +0000] [38] [INFO] Booting worker with pid: 38 [2021-02-11:04:04:45:INFO] Model loaded successfully for worker : 37 Arguments: serve [2021-02-11 04:04:45 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-02-11 04:04:45 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-02-11 04:04:45 +0000] [1] [INFO] Using worker: gevent [2021-02-11 04:04:45 +0000] [36] [INFO] Booting worker with pid: 36 [2021-02-11 04:04:45 +0000] [37] [INFO] Booting worker with pid: 37 [2021-02-11:04:04:45:INFO] Model loaded successfully for worker : 36 [2021-02-11 04:04:45 +0000] [38] [INFO] Booting worker with pid: 38 [2021-02-11:04:04:45:INFO] Model loaded successfully for worker : 37 [2021-02-11 04:04:45 +0000] [39] [INFO] Booting worker with pid: 39 [2021-02-11:04:04:45:INFO] Model loaded successfully for worker : 38 [2021-02-11:04:04:45:INFO] Model loaded successfully for worker : 39 [2021-02-11:04:04:45:INFO] Sniff delimiter as ',' [2021-02-11:04:04:45:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:45:INFO] Sniff delimiter as ',' [2021-02-11:04:04:45:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:45:INFO] Sniff delimiter as ',' [2021-02-11:04:04:45:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:47:INFO] Sniff delimiter as ',' [2021-02-11:04:04:47:INFO] Determined delimiter of CSV input is ',' [2021-02-11 04:04:45 +0000] [39] [INFO] Booting worker with pid: 39 [2021-02-11:04:04:45:INFO] Model loaded successfully for worker : 38 [2021-02-11:04:04:45:INFO] Model loaded successfully for worker : 39 [2021-02-11:04:04:45:INFO] Sniff delimiter as ',' [2021-02-11:04:04:45:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:45:INFO] Sniff delimiter as ',' [2021-02-11:04:04:45:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:45:INFO] Sniff delimiter as ',' [2021-02-11:04:04:45:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:47:INFO] Sniff delimiter as ',' [2021-02-11:04:04:47:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:47:INFO] Sniff delimiter as ',' 
[2021-02-11:04:04:47:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:47:INFO] Sniff delimiter as ',' [2021-02-11:04:04:47:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:47:INFO] Sniff delimiter as ',' [2021-02-11:04:04:47:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:47:INFO] Sniff delimiter as ',' [2021-02-11:04:04:47:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:48:INFO] Sniff delimiter as ',' [2021-02-11:04:04:48:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:48:INFO] Sniff delimiter as ',' [2021-02-11:04:04:48:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:49:INFO] Sniff delimiter as ',' [2021-02-11:04:04:49:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:50:INFO] Sniff delimiter as ',' [2021-02-11:04:04:50:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:50:INFO] Sniff delimiter as ',' [2021-02-11:04:04:49:INFO] Sniff delimiter as ',' [2021-02-11:04:04:49:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:50:INFO] Sniff delimiter as ',' [2021-02-11:04:04:50:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:50:INFO] Sniff delimiter as ',' [2021-02-11:04:04:50:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:50:INFO] Sniff delimiter as ',' [2021-02-11:04:04:50:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:50:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:50:INFO] Sniff delimiter as ',' [2021-02-11:04:04:50:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:52:INFO] Sniff delimiter as ',' [2021-02-11:04:04:52:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:52:INFO] Sniff delimiter as ',' [2021-02-11:04:04:52:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:52:INFO] Sniff delimiter as ',' [2021-02-11:04:04:52:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:52:INFO] Sniff delimiter as ',' [2021-02-11:04:04:52:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:52:INFO] Sniff delimiter as ',' [2021-02-11:04:04:52:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:53:INFO] Sniff delimiter as ',' [2021-02-11:04:04:53:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:52:INFO] Sniff delimiter as ',' [2021-02-11:04:04:52:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:53:INFO] Sniff delimiter as ',' [2021-02-11:04:04:53:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:54:INFO] Sniff delimiter as ',' [2021-02-11:04:04:54:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:55:INFO] Sniff delimiter as ',' [2021-02-11:04:04:55:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:55:INFO] Sniff delimiter as ',' [2021-02-11:04:04:55:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:55:INFO] Sniff delimiter as ',' [2021-02-11:04:04:55:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:54:INFO] Sniff delimiter as ',' [2021-02-11:04:04:54:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:55:INFO] Sniff delimiter as ',' [2021-02-11:04:04:55:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:55:INFO] Sniff delimiter as ',' [2021-02-11:04:04:55:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:55:INFO] Sniff delimiter as ',' [2021-02-11:04:04:55:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:57:INFO] Sniff delimiter as ',' 
[2021-02-11:04:04:57:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:57:INFO] Sniff delimiter as ',' [2021-02-11:04:04:57:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:57:INFO] Sniff delimiter as ',' [2021-02-11:04:04:57:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:57:INFO] Sniff delimiter as ',' [2021-02-11:04:04:57:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:57:INFO] Sniff delimiter as ',' [2021-02-11:04:04:57:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:57:INFO] Sniff delimiter as ',' [2021-02-11:04:04:57:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:57:INFO] Sniff delimiter as ',' [2021-02-11:04:04:57:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:57:INFO] Sniff delimiter as ',' [2021-02-11:04:04:57:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:59:INFO] Sniff delimiter as ',' [2021-02-11:04:04:59:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:59:INFO] Sniff delimiter as ',' [2021-02-11:04:04:59:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:05:00:INFO] Sniff delimiter as ',' [2021-02-11:04:04:59:INFO] Sniff delimiter as ',' [2021-02-11:04:04:59:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:04:59:INFO] Sniff delimiter as ',' [2021-02-11:04:04:59:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:05:00:INFO] Sniff delimiter as ',' [2021-02-11:04:05:00:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:05:00:INFO] Sniff delimiter as ',' [2021-02-11:04:05:00:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:05:00:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:05:00:INFO] Sniff delimiter as ',' [2021-02-11:04:05:00:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/369.5 KiB (1.5 MiB/s) with 1 file(s) remaining Completed 369.5 KiB/369.5 KiB (2.1 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-1-522242990749/xgboost-2021-02-11-03-59-31-220/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. 
Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. Using already existing model: xgboost-2021-02-11-03-43-36-696 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['rent', 'movi', 'tonight', 'look', 'like', 'fun', 'movi', 'figur', 'realli', 'go', 'wrong', 'concept', 'ex', 'girlfriend', 'super', 'power', 'movi', 'confus', 'pointless', 'seem', 'everi', 'turn', 'writer', 'kept', 'throw', 'junk', 'also', 'writer', 'kept', 'throw', 'way', 'much', 'toilet', 'humor', 'sexual', 'situat', 'teenag', 'boy', 'could', 'love', 'seem', 'could', 'simpl', 'draw', 'stori', 'fatal', 'attract', 'super', 'hero', 'guess', 'fun', 'romant', 'comedi', 'advertis', 'could', 'take', 'child', 'see', 'would', 'embarrass', 'see', 'date', 'writer', 'could', 'done', 'basic', 'stori', 'around', 'high', 'concept', 'clean', 'movi', 'might', 'fight', 'chanc', 'seriou', 'wast', 'time', 'b', 'banana'], 1) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. 
The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews. To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:507: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None' warnings.warn("The parameter 'token_pattern' will not be used" ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'reincarn', '21st', 'victorian', 'spill', 'weari', 'playboy', 'ghetto'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'omin', 'sophi', 'optimist', 'orchestr', 'dubiou', 'masterson', 'banana'} ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency. **Question:** What exactly is going on here? Which words, if any, appear with a larger than expected frequency, and what does this mean? What has changed about the world that our original model no longer takes into account? **NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. (TODO) Build a new model
Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed. To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model. **NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results. In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
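One optional way to start on the **Question** above, before the quick length check that follows, is to count how often the newly appearing words are actually used in the new reviews. This is only an illustrative sketch; `new_XV`, `new_vectorizer`, `new_vocabulary` and `original_vocabulary` are the objects defined above.

```python
# Rough exploration sketch (not part of the original walkthrough):
# total number of occurrences of each newly appearing vocabulary word across the new reviews.
import numpy as np

counts = np.asarray(new_XV)
for word in sorted(new_vocabulary - original_vocabulary):
    col = new_vectorizer.vocabulary_[word]   # column index of this word in the new encoding
    print('{:<12s}{:d}'.format(word, int(counts[:, col].sum())))
```

If one of these words turns out to appear far more often than the others, that is a strong hint about what has changed in the new data.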
###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3. **TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = None new_val_location = None new_train_location = None new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set. **TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model.
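# (Added commentary, not part of the original notebook.) A quick reminder of what these settings do:
# max_depth caps how deep each individual tree may grow, eta is the learning rate applied to each
# new tree, gamma and min_child_weight make it harder to create low-value splits, and subsample=0.8
# builds every tree on a random 80% of the training rows. objective='binary:logistic' makes the
# model output probabilities, early_stopping_rounds=10 stops training once the validation error has
# not improved for 10 rounds, and num_round=500 is the maximum number of boosting rounds.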
new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2021-02-11 04:18:42 Starting - Starting the training job... 2021-02-11 04:18:43 Starting - Launching requested ML instances...... 2021-02-11 04:19:55 Starting - Preparing the instances for training...... 2021-02-11 04:21:08 Downloading - Downloading input data 2021-02-11 04:21:08 Training - Downloading the training image... 2021-02-11 04:21:38 Training - Training image download completed. Training in progress..Arguments: train [2021-02-11:04:21:39:INFO] Running standalone xgboost training. [2021-02-11:04:21:39:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8419.98mb [2021-02-11:04:21:39:INFO] Determined delimiter of CSV input is ',' [04:21:39] S3DistributionType set as FullyReplicated [04:21:40] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2021-02-11:04:21:40:INFO] Determined delimiter of CSV input is ',' [04:21:40] S3DistributionType set as FullyReplicated [04:21:42] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [04:21:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 2 pruned nodes, max_depth=5 [0]#011train-error:0.300067#011validation-error:0.2985 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. 
[04:21:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [1]#011train-error:0.2984#011validation-error:0.296 [04:21:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [2]#011train-error:0.287467#011validation-error:0.2835 [04:21:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [3]#011train-error:0.2808#011validation-error:0.2796 [04:21:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 4 pruned nodes, max_depth=5 [4]#011train-error:0.269867#011validation-error:0.2715 [04:21:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [5]#011train-error:0.258667#011validation-error:0.2636 [04:21:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 12 pruned nodes, max_depth=5 [6]#011train-error:0.2572#011validation-error:0.261 [04:21:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [7]#011train-error:0.255067#011validation-error:0.2572 [04:21:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 8 pruned nodes, max_depth=5 [8]#011train-error:0.248#011validation-error:0.2533 [04:21:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [9]#011train-error:0.242867#011validation-error:0.2485 [04:21:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [10]#011train-error:0.2354#011validation-error:0.2406 [04:22:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 8 pruned nodes, max_depth=5 [11]#011train-error:0.228667#011validation-error:0.2366 [04:22:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [12]#011train-error:0.226333#011validation-error:0.2323 [04:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [13]#011train-error:0.222067#011validation-error:0.2295 [04:22:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 2 pruned nodes, max_depth=5 [14]#011train-error:0.2152#011validation-error:0.2247 [04:22:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [15]#011train-error:0.213933#011validation-error:0.2217 [04:22:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [16]#011train-error:0.209267#011validation-error:0.2166 [04:22:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [17]#011train-error:0.205467#011validation-error:0.2144 [04:22:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [18]#011train-error:0.2034#011validation-error:0.2138 [04:22:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [19]#011train-error:0.202133#011validation-error:0.2129 [04:22:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [20]#011train-error:0.197867#011validation-error:0.2095 [04:22:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [21]#011train-error:0.196067#011validation-error:0.2091 [04:22:15] src/tree/updater_prune.cc:74: tree 
pruning end, 1 roots, 44 extra nodes, 2 pruned nodes, max_depth=5 [22]#011train-error:0.192067#011validation-error:0.2041 [04:22:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [23]#011train-error:0.190333#011validation-error:0.2062 [04:22:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [24]#011train-error:0.188333#011validation-error:0.2019 [04:22:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [25]#011train-error:0.186267#011validation-error:0.2018 [04:22:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [26]#011train-error:0.1832#011validation-error:0.1998 [04:22:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [27]#011train-error:0.181333#011validation-error:0.2007 [04:22:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 2 pruned nodes, max_depth=5 [28]#011train-error:0.1766#011validation-error:0.1973 [04:22:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [29]#011train-error:0.1762#011validation-error:0.197 [04:22:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [30]#011train-error:0.173867#011validation-error:0.1965 [04:22:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [31]#011train-error:0.173267#011validation-error:0.1943 [04:22:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 2 pruned nodes, max_depth=5 [32]#011train-error:0.172533#011validation-error:0.1933 [04:22:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [33]#011train-error:0.170533#011validation-error:0.1929 [04:22:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [34]#011train-error:0.169333#011validation-error:0.191 [04:22:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [35]#011train-error:0.168333#011validation-error:0.1909 [04:22:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [36]#011train-error:0.167867#011validation-error:0.1913 [04:22:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [37]#011train-error:0.1668#011validation-error:0.1895 [04:22:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [38]#011train-error:0.165#011validation-error:0.1889 [04:22:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [39]#011train-error:0.1636#011validation-error:0.1893 [04:22:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [40]#011train-error:0.1642#011validation-error:0.1894 [04:22:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [41]#011train-error:0.1626#011validation-error:0.1876 [04:22:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [42]#011train-error:0.1612#011validation-error:0.1873 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe 
more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output .................................2021-02-11T04:30:41.738:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2021-02-11 04:30:41 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-02-11 04:30:41 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-02-11 04:30:41 +0000] [1] [INFO] Using worker: gevent [2021-02-11 04:30:41 +0000] [36] [INFO] Booting worker with pid: 36 [2021-02-11 04:30:41 +0000] [37] [INFO] Booting worker with pid: 37 [2021-02-11:04:30:41:INFO] Model loaded successfully for worker : 36 Arguments: serve [2021-02-11 04:30:41 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-02-11 04:30:41 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-02-11 04:30:41 +0000] [1] [INFO] Using worker: gevent [2021-02-11 04:30:41 +0000] [36] [INFO] Booting worker with pid: 36 [2021-02-11 04:30:41 +0000] [37] [INFO] Booting worker with pid: 37 [2021-02-11:04:30:41:INFO] Model loaded successfully for worker : 36 [2021-02-11 04:30:41 +0000] [38] [INFO] Booting worker with pid: 38 [2021-02-11 04:30:41 +0000] [39] [INFO] Booting worker with pid: 39 [2021-02-11:04:30:41:INFO] Model loaded successfully for worker : 37 [2021-02-11:04:30:41:INFO] Model loaded successfully for worker : 38 [2021-02-11:04:30:41:INFO] Model loaded successfully for worker : 39 [2021-02-11:04:30:42:INFO] Sniff delimiter as ',' [2021-02-11:04:30:42:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:42:INFO] Sniff delimiter as ',' [2021-02-11:04:30:42:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:42:INFO] Sniff delimiter as ',' [2021-02-11:04:30:42:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:42:INFO] Sniff delimiter as ',' [2021-02-11:04:30:42:INFO] Determined delimiter of CSV input is ',' [2021-02-11 04:30:41 +0000] [38] [INFO] Booting worker with pid: 38 [2021-02-11 04:30:41 +0000] [39] [INFO] Booting worker with pid: 39 [2021-02-11:04:30:41:INFO] Model loaded successfully for worker : 37 [2021-02-11:04:30:41:INFO] Model loaded successfully for worker : 38 [2021-02-11:04:30:41:INFO] Model loaded successfully for worker 
: 39 [2021-02-11:04:30:42:INFO] Sniff delimiter as ',' [2021-02-11:04:30:42:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:42:INFO] Sniff delimiter as ',' [2021-02-11:04:30:42:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:42:INFO] Sniff delimiter as ',' [2021-02-11:04:30:42:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:42:INFO] Sniff delimiter as ',' [2021-02-11:04:30:42:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:44:INFO] Sniff delimiter as ',' [2021-02-11:04:30:44:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:44:INFO] Sniff delimiter as ',' [2021-02-11:04:30:44:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:45:INFO] Sniff delimiter as ',' [2021-02-11:04:30:45:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:44:INFO] Sniff delimiter as ',' [2021-02-11:04:30:44:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:44:INFO] Sniff delimiter as ',' [2021-02-11:04:30:44:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:45:INFO] Sniff delimiter as ',' [2021-02-11:04:30:45:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:45:INFO] Sniff delimiter as ',' [2021-02-11:04:30:45:INFO] Sniff delimiter as ',' [2021-02-11:04:30:45:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:45:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:47:INFO] Sniff delimiter as ',' [2021-02-11:04:30:47:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:47:INFO] Sniff delimiter as ',' [2021-02-11:04:30:47:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:47:INFO] Sniff delimiter as ',' [2021-02-11:04:30:47:INFO] Sniff delimiter as ',' [2021-02-11:04:30:47:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:47:INFO] Sniff delimiter as ',' [2021-02-11:04:30:47:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:47:INFO] Sniff delimiter as ',' [2021-02-11:04:30:47:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:47:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:47:INFO] Sniff delimiter as ',' [2021-02-11:04:30:47:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:47:INFO] Sniff delimiter as ',' [2021-02-11:04:30:47:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:49:INFO] Sniff delimiter as ',' [2021-02-11:04:30:49:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:49:INFO] Sniff delimiter as ',' [2021-02-11:04:30:49:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:50:INFO] Sniff delimiter as ',' [2021-02-11:04:30:50:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:49:INFO] Sniff delimiter as ',' [2021-02-11:04:30:49:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:49:INFO] Sniff delimiter as ',' [2021-02-11:04:30:49:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:50:INFO] Sniff delimiter as ',' [2021-02-11:04:30:50:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:50:INFO] Sniff delimiter as ',' [2021-02-11:04:30:50:INFO] Sniff delimiter as ',' [2021-02-11:04:30:50:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:50:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:52:INFO] Sniff delimiter as ',' [2021-02-11:04:30:52:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:52:INFO] Sniff delimiter as ',' [2021-02-11:04:30:52:INFO] Determined delimiter of CSV input is ',' 
[2021-02-11:04:30:52:INFO] Sniff delimiter as ',' [2021-02-11:04:30:52:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:52:INFO] Sniff delimiter as ',' [2021-02-11:04:30:52:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:52:INFO] Sniff delimiter as ',' [2021-02-11:04:30:52:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:52:INFO] Sniff delimiter as ',' [2021-02-11:04:30:52:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:52:INFO] Sniff delimiter as ',' [2021-02-11:04:30:52:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:52:INFO] Sniff delimiter as ',' [2021-02-11:04:30:52:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:55:INFO] Sniff delimiter as ',' [2021-02-11:04:30:55:INFO] Sniff delimiter as ',' [2021-02-11:04:30:55:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:55:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:56:INFO] Sniff delimiter as ',' [2021-02-11:04:30:56:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:57:INFO] Sniff delimiter as ',' [2021-02-11:04:30:57:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:56:INFO] Sniff delimiter as ',' [2021-02-11:04:30:56:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:57:INFO] Sniff delimiter as ',' [2021-02-11:04:30:57:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:57:INFO] Sniff delimiter as ',' [2021-02-11:04:30:57:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:57:INFO] Sniff delimiter as ',' [2021-02-11:04:30:57:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:57:INFO] Sniff delimiter as ',' [2021-02-11:04:30:57:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:57:INFO] Sniff delimiter as ',' [2021-02-11:04:30:57:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:59:INFO] Sniff delimiter as ',' [2021-02-11:04:30:59:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:59:INFO] Sniff delimiter as ',' [2021-02-11:04:30:59:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:59:INFO] Sniff delimiter as ',' [2021-02-11:04:30:59:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:59:INFO] Sniff delimiter as ',' [2021-02-11:04:30:59:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:59:INFO] Sniff delimiter as ',' [2021-02-11:04:30:59:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:30:59:INFO] Sniff delimiter as ',' [2021-02-11:04:30:59:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:31:00:INFO] Sniff delimiter as ',' [2021-02-11:04:31:00:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:31:00:INFO] Sniff delimiter as ',' [2021-02-11:04:31:00:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:31:01:INFO] Sniff delimiter as ',' [2021-02-11:04:31:01:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:31:01:INFO] Sniff delimiter as ',' [2021-02-11:04:31:01:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:31:01:INFO] Sniff delimiter as ',' [2021-02-11:04:31:01:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:31:01:INFO] Sniff delimiter as ',' [2021-02-11:04:31:01:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:31:02:INFO] Sniff delimiter as ',' [2021-02-11:04:31:02:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:31:02:INFO] Sniff delimiter as ',' [2021-02-11:04:31:02:INFO] Determined delimiter of CSV input is ',' 
[2021-02-11:04:31:02:INFO] Sniff delimiter as ',' [2021-02-11:04:31:02:INFO] Determined delimiter of CSV input is ',' [2021-02-11:04:31:02:INFO] Sniff delimiter as ',' [2021-02-11:04:31:02:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/366.2 KiB (2.7 MiB/s) with 1 file(s) remaining Completed 366.2 KiB/366.2 KiB (3.7 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-1-522242990749/xgboost-2021-02-11-04-25-26-531/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. 
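Before actually swapping models, one extra check you could make (purely optional, and assuming you kept a copy of the original model's rounded test-set predictions around, here called `old_test_predictions`, which this notebook does not do) is to see how often the two models disagree on the same reviews.

```python
# Illustrative sketch only: 'old_test_predictions' is an assumed saved copy of the original
# model's rounded outputs on the test set; 'predictions' holds the new model's outputs from above.
import numpy as np

old = np.asarray(old_test_predictions)
new = np.asarray(predictions)
print('The two models disagree on {} of {} test reviews'.format(int((old != new).sum()), len(new)))
```

A large amount of disagreement concentrated in one class would be another sign that the underlying data really has shifted, rather than something being wrong with the new training job.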
Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output ! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. 
Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). 
It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-07-09 00:57:10-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 44.8MB/s in 1.8s 2020-07-09 00:57:12 (44.8 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our 
training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. ###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
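As a small illustration of that point, the toy sketch below (using made-up, already tokenized reviews rather than data from this notebook) fits a `CountVectorizer` on a tiny "training" set only and then reuses the resulting vocabulary to transform an unseen document; words that were never seen during fitting simply get no column.
###Code
# A toy sketch with made-up, already tokenized reviews (not data from this notebook):
# the vectorizer is fit on the training documents only, and the same vocabulary is
# then reused to transform unseen documents, dropping any words it has never seen.
from sklearn.feature_extraction.text import CountVectorizer

toy_train = [['great', 'movi', 'great', 'act'], ['bore', 'plot']]
toy_test = [['great', 'plot', 'new', 'slang']]

toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
print(toy_vectorizer.fit_transform(toy_train).toarray())  # counts over the training vocabulary
print(toy_vectorizer.vocabulary_)                         # word -> column index
print(toy_vectorizer.transform(toy_test).toarray())       # same columns; 'new' and 'slang' are dropped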
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) vocabulary = len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import pandas as pd data_dir = '../data/sentiment_update' import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model of comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code are then used the manipulate the training artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is require when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost', '1.0-1') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output WARNING:root:Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2020-07-09 04:40:29 Starting - Starting the training job... 2020-07-09 04:40:32 Starting - Launching requested ML instances...... 2020-07-09 04:41:40 Starting - Preparing the instances for training...... 2020-07-09 04:42:53 Downloading - Downloading input data... 2020-07-09 04:43:12 Training - Downloading the training image... 2020-07-09 04:43:50 Training - Training image download completed. Training in progress..INFO:sagemaker-containers:Imported framework sagemaker_xgboost_container.training INFO:sagemaker-containers:Failed to parse hyperparameter objective value binary:logistic to Json. 
Returning the value itself INFO:sagemaker-containers:No GPUs detected (normal if no gpus installed) INFO:sagemaker_xgboost_container.training:Running XGBoost Sagemaker in algorithm mode INFO:root:Determined delimiter of CSV input is ',' INFO:root:Determined delimiter of CSV input is ',' INFO:root:Determined delimiter of CSV input is ',' [04:43:55] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, INFO:root:Determined delimiter of CSV input is ',' [04:43:57] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, INFO:root:Single node training. INFO:root:Train matrix has 15000 rows INFO:root:Validation matrix has 10000 rows [04:43:57] WARNING: /workspace/src/learner.cc:328:  Parameters: { early_stopping_rounds, num_round, silent } might not be used. This may not be accurate due to some parameters are only used in language bindings but passed down to XGBoost core. Or some parameters are not used but slip through this verification. Please open an issue if you find above cases.  [0]#011train-error:0.29140#011validation-error:0.30560 [1]#011train-error:0.27967#011validation-error:0.28570 [2]#011train-error:0.27833#011validation-error:0.28780 [3]#011train-error:0.26667#011validation-error:0.27620 [4]#011train-error:0.25933#011validation-error:0.27080 [5]#011train-error:0.24687#011validation-error:0.25880 [6]#011train-error:0.24053#011validation-error:0.25370 [7]#011train-error:0.23707#011validation-error:0.25000 [8]#011train-error:0.22973#011validation-error:0.24580 [9]#011train-error:0.22807#011validation-error:0.24560 [10]#011train-error:0.22280#011validation-error:0.24170 [11]#011train-error:0.21587#011validation-error:0.23280 [12]#011train-error:0.21373#011validation-error:0.23260 [13]#011train-error:0.20960#011validation-error:0.22820 [14]#011train-error:0.20620#011validation-error:0.22330 [15]#011train-error:0.20573#011validation-error:0.22230 [16]#011train-error:0.20013#011validation-error:0.21970 [17]#011train-error:0.19633#011validation-error:0.21620 [18]#011train-error:0.19420#011validation-error:0.21340 [19]#011train-error:0.19147#011validation-error:0.21090 [20]#011train-error:0.18740#011validation-error:0.20750 [21]#011train-error:0.18507#011validation-error:0.20530 [22]#011train-error:0.18340#011validation-error:0.20510 [23]#011train-error:0.18120#011validation-error:0.20360 [24]#011train-error:0.17960#011validation-error:0.20120 [25]#011train-error:0.17673#011validation-error:0.20130 [26]#011train-error:0.17500#011validation-error:0.19840 [27]#011train-error:0.17327#011validation-error:0.19720 [28]#011train-error:0.16993#011validation-error:0.19490 [29]#011train-error:0.16887#011validation-error:0.19380 [30]#011train-error:0.16613#011validation-error:0.19130 [31]#011train-error:0.16327#011validation-error:0.19320 [32]#011train-error:0.16160#011validation-error:0.19150 [33]#011train-error:0.16147#011validation-error:0.19020 [34]#011train-error:0.15907#011validation-error:0.18840 [35]#011train-error:0.15627#011validation-error:0.18680 [36]#011train-error:0.15427#011validation-error:0.18570 [37]#011train-error:0.15427#011validation-error:0.18350 [38]#011train-error:0.15400#011validation-error:0.18220 [39]#011train-error:0.15247#011validation-error:0.18140 [40]#011train-error:0.15047#011validation-error:0.18100 [41]#011train-error:0.14860#011validation-error:0.18050 [42]#011train-error:0.14760#011validation-error:0.17990 
[43]#011train-error:0.14507#011validation-error:0.18010 [44]#011train-error:0.14453#011validation-error:0.17850 [45]#011train-error:0.14313#011validation-error:0.17790 [46]#011train-error:0.14327#011validation-error:0.17700 [47]#011train-error:0.14253#011validation-error:0.17800 [48]#011train-error:0.14107#011validation-error:0.17750 [49]#011train-error:0.13947#011validation-error:0.17640 [50]#011train-error:0.13800#011validation-error:0.17440 [51]#011train-error:0.13613#011validation-error:0.17310 [52]#011train-error:0.13587#011validation-error:0.17260 [53]#011train-error:0.13487#011validation-error:0.17210 [54]#011train-error:0.13280#011validation-error:0.17240 [55]#011train-error:0.13253#011validation-error:0.17180 [56]#011train-error:0.13053#011validation-error:0.17030 [57]#011train-error:0.12953#011validation-error:0.16980 [58]#011train-error:0.12873#011validation-error:0.17000 [59]#011train-error:0.12807#011validation-error:0.17110 [60]#011train-error:0.12760#011validation-error:0.17020 [61]#011train-error:0.12700#011validation-error:0.16950 [62]#011train-error:0.12767#011validation-error:0.16850 [63]#011train-error:0.12640#011validation-error:0.16910 [64]#011train-error:0.12593#011validation-error:0.16780 [65]#011train-error:0.12493#011validation-error:0.16830 [66]#011train-error:0.12440#011validation-error:0.16730 [67]#011train-error:0.12340#011validation-error:0.16590 [68]#011train-error:0.12207#011validation-error:0.16590 [69]#011train-error:0.12167#011validation-error:0.16540 [70]#011train-error:0.12093#011validation-error:0.16460 [71]#011train-error:0.12000#011validation-error:0.16410 [72]#011train-error:0.11907#011validation-error:0.16240 [73]#011train-error:0.11867#011validation-error:0.16080 [74]#011train-error:0.11833#011validation-error:0.16020 [75]#011train-error:0.11773#011validation-error:0.16080 [76]#011train-error:0.11707#011validation-error:0.16150 [77]#011train-error:0.11607#011validation-error:0.16060 [78]#011train-error:0.11620#011validation-error:0.16010 [79]#011train-error:0.11633#011validation-error:0.16000 [80]#011train-error:0.11567#011validation-error:0.15960 [81]#011train-error:0.11400#011validation-error:0.15910 [82]#011train-error:0.11267#011validation-error:0.15930 [83]#011train-error:0.11280#011validation-error:0.15890 [84]#011train-error:0.11227#011validation-error:0.15850 [85]#011train-error:0.11133#011validation-error:0.15790 [86]#011train-error:0.11073#011validation-error:0.15790 [87]#011train-error:0.11133#011validation-error:0.15810 [88]#011train-error:0.11167#011validation-error:0.15820 [89]#011train-error:0.11140#011validation-error:0.15730 [90]#011train-error:0.11047#011validation-error:0.15740 [91]#011train-error:0.11013#011validation-error:0.15690 [92]#011train-error:0.10980#011validation-error:0.15720 [93]#011train-error:0.10893#011validation-error:0.15640 [94]#011train-error:0.10827#011validation-error:0.15580 [95]#011train-error:0.10793#011validation-error:0.15660 [96]#011train-error:0.10780#011validation-error:0.15780 [97]#011train-error:0.10707#011validation-error:0.15750 [98]#011train-error:0.10633#011validation-error:0.15710 [99]#011train-error:0.10567#011validation-error:0.15630 [100]#011train-error:0.10533#011validation-error:0.15580 [101]#011train-error:0.10487#011validation-error:0.15550 [102]#011train-error:0.10460#011validation-error:0.15490 [103]#011train-error:0.10380#011validation-error:0.15380 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. 
To do this we will use SageMakers Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can peform inference on a large number of samples. An example of this in industry might be peforming an end of month report. This method of inference can also be useful to us as it means to can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer objects from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output WARNING:sagemaker:Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output .....................[2020-07-09:04:51:54:INFO] No GPUs detected (normal if no gpus installed) [2020-07-09:04:51:54:INFO] No GPUs detected (normal if no gpus installed) [2020-07-09:04:51:54:INFO] nginx config:  worker_processes auto; daemon off; pid /tmp/nginx.pid; error_log /dev/stderr;  worker_rlimit_nofile 4096;  events { worker_connections 2048; }  http { include /etc/nginx/mime.types; default_type application/octet-stream; access_log /dev/stdout combined; upstream gunicorn { server unix:/tmp/gunicorn.sock; } server { listen 8080 deferred; client_max_body_size 0; keepalive_timeout 3; location ~ ^/(ping|invocations|execution-parameters) { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_read_timeout 60s; proxy_pass http://gunicorn; } location / { return 404 "{}"; } } }  [2020-07-09 04:51:55 +0000] [23] [INFO] Starting gunicorn 19.10.0 [2020-07-09 04:51:55 +0000] [23] [INFO] Listening at: unix:/tmp/gunicorn.sock (23) [2020-07-09 04:51:55 +0000] [23] [INFO] Using worker: gevent [2020-07-09 04:51:55 +0000] [26] [INFO] Booting worker with pid: 26 [2020-07-09 04:51:55 +0000] [27] [INFO] Booting worker with pid: 27 [2020-07-09 04:51:55 +0000] [28] [INFO] Booting worker with pid: 28 [2020-07-09 04:51:55 +0000] [29] [INFO] Booting worker with pid: 29 [2020-07-09:04:52:07:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [09/Jul/2020:04:52:07 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" [2020-07-09:04:52:07:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [09/Jul/2020:04:52:07 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" [2020-07-09:04:52:07:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [09/Jul/2020:04:52:07 +0000] "GET /ping HTTP/1.1" 200 
0 "-" "Go-http-client/1.1" [2020-07-09:04:52:07:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [09/Jul/2020:04:52:07 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" [2020-07-09:04:52:09:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:09:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:09:INFO] No GPUs detected (normal if no gpus installed) [2020-07-09:04:52:09:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:10:INFO] No GPUs detected (normal if no gpus installed) [2020-07-09:04:52:10:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:10:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:09:INFO] No GPUs detected (normal if no gpus installed) [2020-07-09:04:52:09:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:10:INFO] No GPUs detected (normal if no gpus installed) [2020-07-09:04:52:10:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:10:INFO] Determined delimiter of CSV input is ',' 2020-07-09T04:52:07.252:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD 169.254.255.130 - - [09/Jul/2020:04:52:13 +0000] "POST /invocations HTTP/1.1" 200 12195 "-" "Go-http-client/1.1" [2020-07-09:04:52:13:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:13 +0000] "POST /invocations HTTP/1.1" 200 12195 "-" "Go-http-client/1.1" [2020-07-09:04:52:13:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:14:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:14:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:16 +0000] "POST /invocations HTTP/1.1" 200 12178 "-" "Go-http-client/1.1" [2020-07-09:04:52:16:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:17 +0000] "POST /invocations HTTP/1.1" 200 12195 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:04:52:16 +0000] "POST /invocations HTTP/1.1" 200 12178 "-" "Go-http-client/1.1" [2020-07-09:04:52:16:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:17 +0000] "POST /invocations HTTP/1.1" 200 12195 "-" "Go-http-client/1.1" [2020-07-09:04:52:17:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:17 +0000] "POST /invocations HTTP/1.1" 200 12210 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:04:52:17 +0000] "POST /invocations HTTP/1.1" 200 12208 "-" "Go-http-client/1.1" [2020-07-09:04:52:17:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:17:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:17:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:17 +0000] "POST /invocations HTTP/1.1" 200 12210 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:04:52:17 +0000] "POST /invocations HTTP/1.1" 200 12208 "-" "Go-http-client/1.1" [2020-07-09:04:52:17:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:17:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:20 +0000] "POST /invocations HTTP/1.1" 200 12270 "-" "Go-http-client/1.1" [2020-07-09:04:52:20:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:20 +0000] "POST /invocations HTTP/1.1" 200 12232 "-" "Go-http-client/1.1" [2020-07-09:04:52:21:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:21 +0000] "POST /invocations 
HTTP/1.1" 200 12246 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:04:52:21 +0000] "POST /invocations HTTP/1.1" 200 12231 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:04:52:20 +0000] "POST /invocations HTTP/1.1" 200 12270 "-" "Go-http-client/1.1" [2020-07-09:04:52:20:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:20 +0000] "POST /invocations HTTP/1.1" 200 12232 "-" "Go-http-client/1.1" [2020-07-09:04:52:21:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:21 +0000] "POST /invocations HTTP/1.1" 200 12246 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:04:52:21 +0000] "POST /invocations HTTP/1.1" 200 12231 "-" "Go-http-client/1.1" [2020-07-09:04:52:21:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:21:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:21:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:21:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:24 +0000] "POST /invocations HTTP/1.1" 200 12199 "-" "Go-http-client/1.1" [2020-07-09:04:52:24:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:24 +0000] "POST /invocations HTTP/1.1" 200 12199 "-" "Go-http-client/1.1" [2020-07-09:04:52:24:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:24 +0000] "POST /invocations HTTP/1.1" 200 12237 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:04:52:24 +0000] "POST /invocations HTTP/1.1" 200 12259 "-" "Go-http-client/1.1" [2020-07-09:04:52:24:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:25:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:24 +0000] "POST /invocations HTTP/1.1" 200 12237 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:04:52:24 +0000] "POST /invocations HTTP/1.1" 200 12259 "-" "Go-http-client/1.1" [2020-07-09:04:52:24:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:25:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:27 +0000] "POST /invocations HTTP/1.1" 200 12211 "-" "Go-http-client/1.1" [2020-07-09:04:52:27:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:28 +0000] "POST /invocations HTTP/1.1" 200 12284 "-" "Go-http-client/1.1" [2020-07-09:04:52:28:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:27 +0000] "POST /invocations HTTP/1.1" 200 12211 "-" "Go-http-client/1.1" [2020-07-09:04:52:27:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:28 +0000] "POST /invocations HTTP/1.1" 200 12284 "-" "Go-http-client/1.1" [2020-07-09:04:52:28:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:28 +0000] "POST /invocations HTTP/1.1" 200 12240 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:04:52:28 +0000] "POST /invocations HTTP/1.1" 200 12177 "-" "Go-http-client/1.1" [2020-07-09:04:52:28:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:28:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:28 +0000] "POST /invocations HTTP/1.1" 200 12240 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:04:52:28 +0000] "POST /invocations HTTP/1.1" 200 12177 "-" "Go-http-client/1.1" [2020-07-09:04:52:28:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:28:INFO] Determined delimiter of CSV input is ',' 
169.254.255.130 - - [09/Jul/2020:04:52:31 +0000] "POST /invocations HTTP/1.1" 200 12203 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:04:52:31 +0000] "POST /invocations HTTP/1.1" 200 12203 "-" "Go-http-client/1.1" [2020-07-09:04:52:31:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:31 +0000] "POST /invocations HTTP/1.1" 200 12222 "-" "Go-http-client/1.1" [2020-07-09:04:52:31:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:32 +0000] "POST /invocations HTTP/1.1" 200 12241 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:04:52:32 +0000] "POST /invocations HTTP/1.1" 200 12240 "-" "Go-http-client/1.1" [2020-07-09:04:52:32:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:31:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:31 +0000] "POST /invocations HTTP/1.1" 200 12222 "-" "Go-http-client/1.1" [2020-07-09:04:52:31:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:04:52:32 +0000] "POST /invocations HTTP/1.1" 200 12241 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:04:52:32 +0000] "POST /invocations HTTP/1.1" 200 12240 "-" "Go-http-client/1.1" [2020-07-09:04:52:32:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:32:INFO] Determined delimiter of CSV input is ',' [2020-07-09:04:52:32:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive s3://sagemaker-us-west-2-594602753794/sagemaker-xgboost-2020-07-09-02-42-23-987 $data_dir ###Output Completed 256.0 KiB/470.2 KiB (2.1 MiB/s) with 1 file(s) remaining Completed 470.2 KiB/470.2 KiB (3.6 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-west-2-594602753794/sagemaker-xgboost-2020-07-09-02-42-23-987/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. 
Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.fit_transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. 
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ....................[2020-07-09:02:46:00:INFO] No GPUs detected (normal if no gpus installed) [2020-07-09:02:46:00:INFO] No GPUs detected (normal if no gpus installed) [2020-07-09:02:46:00:INFO] nginx config:  worker_processes auto; daemon off; pid /tmp/nginx.pid; error_log /dev/stderr;  worker_rlimit_nofile 4096;  events { worker_connections 2048; }  http { include /etc/nginx/mime.types; default_type application/octet-stream; access_log /dev/stdout combined; upstream gunicorn { server unix:/tmp/gunicorn.sock; } server { listen 8080 deferred; client_max_body_size 0; keepalive_timeout 3; location ~ ^/(ping|invocations|execution-parameters) { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_read_timeout 60s; proxy_pass http://gunicorn; } location / { return 404 "{}"; } } }  [2020-07-09 02:46:00 +0000] [19] [INFO] Starting gunicorn 19.10.0 [2020-07-09 02:46:00 +0000] [19] [INFO] Listening at: unix:/tmp/gunicorn.sock (19) [2020-07-09 02:46:00 +0000] [19] [INFO] Using worker: gevent [2020-07-09 02:46:00 +0000] [26] [INFO] Booting worker with pid: 26 [2020-07-09 02:46:00 +0000] [27] [INFO] Booting worker with pid: 27 [2020-07-09 02:46:00 +0000] [28] [INFO] Booting worker with pid: 28 [2020-07-09 02:46:00 +0000] [29] [INFO] Booting worker with pid: 29 [2020-07-09:02:46:16:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [09/Jul/2020:02:46:16 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" [2020-07-09:02:46:16:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [09/Jul/2020:02:46:16 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" [2020-07-09:02:46:19:INFO] No GPUs detected (normal if no gpus installed) [2020-07-09:02:46:19:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:19:INFO] No GPUs detected (normal if no gpus installed) [2020-07-09:02:46:19:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:19:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:19:INFO] Determined delimiter of CSV input is ',' 2020-07-09T02:46:16.333:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD 169.254.255.130 - - [09/Jul/2020:02:46:22 +0000] "POST /invocations HTTP/1.1" 200 12093 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:02:46:22 +0000] "POST /invocations HTTP/1.1" 200 12130 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:02:46:22 +0000] "POST /invocations HTTP/1.1" 200 12134 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:02:46:22 +0000] "POST /invocations HTTP/1.1" 200 12104 "-" "Go-http-client/1.1" [2020-07-09:02:46:23:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:23:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:23:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:23:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:02:46:22 +0000] "POST /invocations HTTP/1.1" 200 12093 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:02:46:22 +0000] "POST /invocations HTTP/1.1" 200 12130 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:02:46:22 +0000] "POST /invocations HTTP/1.1" 200 12134 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:02:46:22 +0000] "POST /invocations HTTP/1.1" 200 12104 "-" "Go-http-client/1.1" 
[2020-07-09:02:46:23:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:23:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:23:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:23:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:02:46:26 +0000] "POST /invocations HTTP/1.1" 200 12090 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:02:46:26 +0000] "POST /invocations HTTP/1.1" 200 12113 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:02:46:26 +0000] "POST /invocations HTTP/1.1" 200 12145 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:02:46:26 +0000] "POST /invocations HTTP/1.1" 200 12139 "-" "Go-http-client/1.1" [2020-07-09:02:46:26:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:26:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:26:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:27:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:02:46:26 +0000] "POST /invocations HTTP/1.1" 200 12090 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:02:46:26 +0000] "POST /invocations HTTP/1.1" 200 12113 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:02:46:26 +0000] "POST /invocations HTTP/1.1" 200 12145 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:02:46:26 +0000] "POST /invocations HTTP/1.1" 200 12139 "-" "Go-http-client/1.1" [2020-07-09:02:46:26:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:26:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:26:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:27:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:32:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:32:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:02:46:33 +0000] "POST /invocations HTTP/1.1" 200 12106 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:02:46:33 +0000] "POST /invocations HTTP/1.1" 200 12106 "-" "Go-http-client/1.1" [2020-07-09:02:46:33:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:02:46:33 +0000] "POST /invocations HTTP/1.1" 200 12099 "-" "Go-http-client/1.1" [2020-07-09:02:46:34:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:34:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:33:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:02:46:33 +0000] "POST /invocations HTTP/1.1" 200 12099 "-" "Go-http-client/1.1" [2020-07-09:02:46:34:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:34:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:02:46:35 +0000] "POST /invocations HTTP/1.1" 200 12108 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:02:46:35 +0000] "POST /invocations HTTP/1.1" 200 12108 "-" "Go-http-client/1.1" [2020-07-09:02:46:35:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:35:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:02:46:36 +0000] "POST /invocations HTTP/1.1" 200 12108 "-" "Go-http-client/1.1" [2020-07-09:02:46:36:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:02:46:36 +0000] "POST /invocations HTTP/1.1" 200 12108 "-" "Go-http-client/1.1" [2020-07-09:02:46:36:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:02:46:37 +0000] "POST /invocations HTTP/1.1" 200 12100 "-" "Go-http-client/1.1" 
169.254.255.130 - - [09/Jul/2020:02:46:37 +0000] "POST /invocations HTTP/1.1" 200 12120 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:02:46:37 +0000] "POST /invocations HTTP/1.1" 200 12100 "-" "Go-http-client/1.1" 169.254.255.130 - - [09/Jul/2020:02:46:37 +0000] "POST /invocations HTTP/1.1" 200 12120 "-" "Go-http-client/1.1" [2020-07-09:02:46:37:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:37:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:37:INFO] Determined delimiter of CSV input is ',' [2020-07-09:02:46:37:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:02:46:38 +0000] "POST /invocations HTTP/1.1" 200 12141 "-" "Go-http-client/1.1" [2020-07-09:02:46:39:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [09/Jul/2020:02:46:38 +0000] "POST /invocations HTTP/1.1" 200 12141 "-" "Go-http-client/1.1" [2020-07-09:02:46:39:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive s3://sagemaker-us-west-2-594602753794/sagemaker-xgboost-2020-07-09-02-42-23-987 $data_dir ###Output Completed 256.0 KiB/470.2 KiB (2.3 MiB/s) with 1 file(s) remaining Completed 470.2 KiB/470.2 KiB (4.1 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-west-2-594602753794/sagemaker-xgboost-2020-07-09-02-42-23-987/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code from sklearn.metrics import accuracy_score accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') ###Output WARNING:sagemaker:Parameter image will be renamed to image_uri in SageMaker Python SDK v2. 
WARNING:sagemaker:Using already existing model: sagemaker-xgboost-2020-07-09-04-40-29-637 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['low', 'budget', 'enterpris', 'filmmak', 'manufactur', 'distribut', 'dvd', 'perhap', 'expect', 'much', 'broken', 'disc', 'form', 'yet', 'remark', 'whole', 'achiev', 'fact', 'releas', 'come', 'enough', 'extra', 'shame', 'jame', 'cameron', 'dvd', 'decidedli', 'fine', 'present', 'regard', 'latter', 'major', 'flaw', 'broken', 'come', 'non', 'anamorph', 'transfer', 'otherwis', 'get', 'film', 'origin', '1', '85', '1', 'ratio', 'demonstr', 'technic', 'flaw', 'look', 'pretti', 'much', 'expect', 'inde', 'given', 'ferrari', 'hand', 'approach', 'put', 'disc', 'togeth', 'pretti', 'much', 'guarante', 'fact', 'also', 'true', 'soundtrack', 'offer', 'dd2', '0', 'dd5', '1', 'mix', 'whilst', 'uncertain', 'deem', 'origin', 'fact', 'ferrari', 'involv', 'mean', 'neither', 'consid', 'inferior', 'inde', 'though', 'dd5', '1', 'may', 'offer', 'atmospher', 'view', 'experi', 'owe', 'manner', 'util', 'score', 'equal', 'fine', 'free', 'technic', 'flaw', 'extra', 'disc', 'posit', 'overwhelm', 'take', 'look', 'sidebar', 'right', 'screen', 'notic', 'numer', 'commentari', 'load', 'featurett', 'variou', 'galleri', 'inde', 'given', 'manner', 'everyth', 'broken', 'minut', 'chunk', 'rather', 'compil', 'lengthi', 'documentari', 'realli', 'littl', 'discuss', 'anatomi', 'stunt', 'featurett', 'exampl', 'exactli', 'claim', 'goe', 'rest', 'piec', 'get', 'coverag', 'pretti', 'much', 'ever', 'aspect', 'broken', 'pre', 'product', 'product', 'post', 'product', 'whilst', 'may', 'prefer', 'find', 'easili', 'digest', 'overal', 'make', 'manner', 'get', 'easi', 'access', 'whatev', 'special', 'featur', 'may', 'wish', 'view', 'variou', 'piec', 'perhap', 'commentari', 'need', 'kind', 'discuss', 'also', 'predict', 'air', 'chat', 'track', 'one', 'involv', 'actor', 'overli', 'jokey', 'take', 'film', 'serious', 'ferrari', 'piec', 'incred', 'enthusiast', 'whole', 'thing', 'technic', 'one', 'well', 'extrem', 'technic', 'cours', 'also', 'get', 'crossov', 'cover', 'elsewher', 'disc', '19', 'minut', 'none', 'piec', 'outstay', 'welcom', 'inde', 'fine', 'extra', 
'packag'], 1) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews. To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:507: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None' warnings.warn("The parameter 'token_pattern' will not be used" ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output _____no_output_____ ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output _____no_output_____ ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency. **Question:** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account? **NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. (TODO) Build a new model Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed. To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model. **NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results. In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
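Before building the new encoding, here is one rough way to start digging into the question above: count how often the words that are unique to the new vocabulary actually occur in the new reviews. The cell below is only a sketch of such a check (it assumes the `new_X`, `new_vectorizer`, `original_vocabulary` and `new_vocabulary` objects from the cells above are still in memory); a word from the new-only set that turns up with a surprisingly large count is a good hint about what has changed. ###Code
# Sketch: tally, over the new reviews, the words that are in the new vocabulary but not the original one
import numpy as np

new_counts = new_vectorizer.transform(new_X)             # sparse document-term matrix for the new reviews
totals = np.asarray(new_counts.sum(axis=0)).flatten()    # total occurrences of each word in the new vocabulary

only_new = new_vocabulary - original_vocabulary
freq_only_new = {word: int(totals[new_vectorizer.vocabulary_[word]]) for word in only_new}

# The most frequent of these words are the most likely culprits for the shift
sorted(freq_only_new.items(), key=lambda kv: kv[1], reverse=True)[:10]
###Output
_____no_output_____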
###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = None new_val_location = None new_train_location = None ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = None # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = None s3_new_input_validation = None # TODO: Using the new validation and training data, 'fit' your new model. 
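# One possible completion, shown only as a sketch: it mirrors the pattern used for the original
# model earlier in the notebook and assumes the new_train_location, new_val_location and new_xgb
# TODOs above have been filled in.
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')

new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})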
###Output _____no_output_____ ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = None ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. ###Output _____no_output_____ ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output _____no_output_____ ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = None ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. 
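For reference, one way the three TODO cells in this subsection might be completed is sketched below; it simply reuses the same calls that were applied to the original model and is not the only valid solution. ###Code
# Sketch: build a transformer from the new model, score the new data with it, and
# re-encode the original test reviews using the vocabulary fit on the new data.
# Assumes new_xgb has been trained and new_data_location points at the uploaded new_data.csv.
new_xgb_transformer = new_xgb.transformer(instance_count=1, instance_type='ml.m4.xlarge')

new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()

# Bag-of-words encode the original test reviews with the new vocabulary
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____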
###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = None # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = None ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. 
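For reference, the two TODO cells above might be filled in along the following lines; this is only a sketch, and the endpoint configuration name and variant name used here are illustrative choices rather than required values. ###Code
# Sketch: create an endpoint configuration for the new model, then point the existing endpoint at it.
from time import gmtime, strftime

new_xgb_endpoint_config_name = "sentiment-update-xgboost-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())

new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
    EndpointConfigName=new_xgb_endpoint_config_name,
    ProductionVariants=[{
        "InstanceType": "ml.m4.xlarge",
        "InitialVariantWeight": 1,
        "InitialInstanceCount": 1,
        "ModelName": new_xgb_transformer.model_name,
        "VariantName": "XGB-Model"
    }])

# Updating the endpoint swaps in the new model behind the scenes, with no downtime for the app
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint,
                                         EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____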
###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. 
Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-05-10 14:58:53-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 23.7MB/s in 4.2s 2020-05-10 14:58:57 (19.2 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Wrote preprocessed data to cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.
- Model Artifacts
- Training Code (Container)
- Inference Code (Container)
The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training. The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data. The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code
from sagemaker import get_execution_role

# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()

# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost', repo_version = '1.0-1')

# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
                                    role,      # What is our current IAM Role
                                    train_instance_count=1, # How many compute instances
                                    train_instance_type='ml.m4.xlarge', # What kind of compute instances
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                    sagemaker_session=session)

# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
                        eta=0.2,
                        gamma=4,
                        min_child_weight=6,
                        subsample=0.8,
                        silent=0,
                        objective='binary:logistic',
                        early_stopping_rounds=10,
                        num_round=500)
###Output _____no_output_____ ###Markdown Fit the XGBoost model Now that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')

xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output 2020-05-10 15:54:59 Starting - Starting the training job... 2020-05-10 15:55:01 Starting - Launching requested ML instances...... 2020-05-10 15:56:01 Starting - Preparing the instances for training... 2020-05-10 15:57:00 Downloading - Downloading input data...... 2020-05-10 15:57:52 Training - Training image download completed. Training in progress.INFO:sagemaker-containers:Imported framework sagemaker_xgboost_container.training INFO:sagemaker-containers:Failed to parse hyperparameter objective value binary:logistic to Json.
Returning the value itself INFO:sagemaker-containers:No GPUs detected (normal if no gpus installed) INFO:sagemaker_xgboost_container.training:Running XGBoost Sagemaker in algorithm mode INFO:root:Determined delimiter of CSV input is ',' INFO:root:Determined delimiter of CSV input is ',' INFO:root:Determined delimiter of CSV input is ',' [15:57:57] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, INFO:root:Determined delimiter of CSV input is ',' [15:57:59] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, INFO:root:Single node training. INFO:root:Train matrix has 15000 rows INFO:root:Validation matrix has 10000 rows [15:57:59] WARNING: /workspace/src/learner.cc:328:  Parameters: { early_stopping_rounds, num_round, silent } might not be used. This may not be accurate due to some parameters are only used in language bindings but passed down to XGBoost core. Or some parameters are not used but slip through this verification. Please open an issue if you find above cases.  [0]#011train-error:0.29633#011validation-error:0.29610 [1]#011train-error:0.28240#011validation-error:0.28120 [2]#011train-error:0.27367#011validation-error:0.27410 [3]#011train-error:0.26547#011validation-error:0.26810 [4]#011train-error:0.26053#011validation-error:0.26640 [5]#011train-error:0.25540#011validation-error:0.26130 [6]#011train-error:0.24920#011validation-error:0.25200 [7]#011train-error:0.24500#011validation-error:0.24940 [8]#011train-error:0.23473#011validation-error:0.24530 [9]#011train-error:0.22720#011validation-error:0.23830 [10]#011train-error:0.22253#011validation-error:0.23360 [11]#011train-error:0.21920#011validation-error:0.22910 [12]#011train-error:0.21213#011validation-error:0.22680 [13]#011train-error:0.20840#011validation-error:0.22320 [14]#011train-error:0.20267#011validation-error:0.21760 [15]#011train-error:0.19993#011validation-error:0.21470 [16]#011train-error:0.19820#011validation-error:0.21370 [17]#011train-error:0.19520#011validation-error:0.21170 [18]#011train-error:0.19327#011validation-error:0.20870 [19]#011train-error:0.19167#011validation-error:0.20650 [20]#011train-error:0.18933#011validation-error:0.20430 [21]#011train-error:0.18553#011validation-error:0.20310 [22]#011train-error:0.18340#011validation-error:0.20090 [23]#011train-error:0.17953#011validation-error:0.19770 [24]#011train-error:0.17960#011validation-error:0.19520 [25]#011train-error:0.17653#011validation-error:0.19300 [26]#011train-error:0.17453#011validation-error:0.19160 [27]#011train-error:0.17000#011validation-error:0.18950 [28]#011train-error:0.16807#011validation-error:0.18840 [29]#011train-error:0.16620#011validation-error:0.18690 [30]#011train-error:0.16540#011validation-error:0.18520 [31]#011train-error:0.16380#011validation-error:0.18300 [32]#011train-error:0.16227#011validation-error:0.18180 [33]#011train-error:0.16093#011validation-error:0.18170 [34]#011train-error:0.15987#011validation-error:0.18050 [35]#011train-error:0.15720#011validation-error:0.17890 [36]#011train-error:0.15620#011validation-error:0.17730 [37]#011train-error:0.15407#011validation-error:0.17670 [38]#011train-error:0.15347#011validation-error:0.17710 [39]#011train-error:0.15233#011validation-error:0.17650 [40]#011train-error:0.15127#011validation-error:0.17700 [41]#011train-error:0.14960#011validation-error:0.17560 [42]#011train-error:0.14847#011validation-error:0.17320 
[43]#011train-error:0.14833#011validation-error:0.17360 [44]#011train-error:0.14627#011validation-error:0.17240 [45]#011train-error:0.14420#011validation-error:0.17220 [46]#011train-error:0.14400#011validation-error:0.17110 [47]#011train-error:0.14340#011validation-error:0.17110 [48]#011train-error:0.14267#011validation-error:0.17080 [49]#011train-error:0.14233#011validation-error:0.16970 [50]#011train-error:0.14140#011validation-error:0.16930 [51]#011train-error:0.14007#011validation-error:0.16730 [52]#011train-error:0.13927#011validation-error:0.16760 [53]#011train-error:0.13780#011validation-error:0.16670 [54]#011train-error:0.13780#011validation-error:0.16560 [55]#011train-error:0.13627#011validation-error:0.16510 [56]#011train-error:0.13620#011validation-error:0.16460 [57]#011train-error:0.13407#011validation-error:0.16550 [58]#011train-error:0.13260#011validation-error:0.16510 [59]#011train-error:0.13247#011validation-error:0.16430 [60]#011train-error:0.13173#011validation-error:0.16370 [61]#011train-error:0.13193#011validation-error:0.16330 [62]#011train-error:0.13173#011validation-error:0.16250 [63]#011train-error:0.13160#011validation-error:0.16270 [64]#011train-error:0.13127#011validation-error:0.16250 [65]#011train-error:0.13047#011validation-error:0.16160 [66]#011train-error:0.13013#011validation-error:0.16140 [67]#011train-error:0.12967#011validation-error:0.16140 [68]#011train-error:0.12920#011validation-error:0.16090 [69]#011train-error:0.12813#011validation-error:0.16020 [70]#011train-error:0.12780#011validation-error:0.15960 [71]#011train-error:0.12667#011validation-error:0.16030 [72]#011train-error:0.12560#011validation-error:0.15930 [73]#011train-error:0.12500#011validation-error:0.15830 [74]#011train-error:0.12387#011validation-error:0.15700 [75]#011train-error:0.12273#011validation-error:0.15670 [76]#011train-error:0.12273#011validation-error:0.15660 [77]#011train-error:0.12280#011validation-error:0.15680 [78]#011train-error:0.12233#011validation-error:0.15620 [79]#011train-error:0.12053#011validation-error:0.15590 [80]#011train-error:0.12027#011validation-error:0.15550 [81]#011train-error:0.12000#011validation-error:0.15500 [82]#011train-error:0.11947#011validation-error:0.15470 [83]#011train-error:0.11860#011validation-error:0.15430 [84]#011train-error:0.11780#011validation-error:0.15370 [85]#011train-error:0.11693#011validation-error:0.15420 [86]#011train-error:0.11640#011validation-error:0.15400 [87]#011train-error:0.11547#011validation-error:0.15360 [88]#011train-error:0.11467#011validation-error:0.15300 [89]#011train-error:0.11433#011validation-error:0.15210 [90]#011train-error:0.11420#011validation-error:0.15120 [91]#011train-error:0.11367#011validation-error:0.15110 [92]#011train-error:0.11353#011validation-error:0.15010 [93]#011train-error:0.11240#011validation-error:0.14990 [94]#011train-error:0.11187#011validation-error:0.15070 [95]#011train-error:0.11227#011validation-error:0.15100 [96]#011train-error:0.11180#011validation-error:0.15050 [97]#011train-error:0.11193#011validation-error:0.15030 [98]#011train-error:0.11147#011validation-error:0.14960 [99]#011train-error:0.11167#011validation-error:0.14920 [100]#011train-error:0.11173#011validation-error:0.14890 [101]#011train-error:0.11087#011validation-error:0.14880 [102]#011train-error:0.11060#011validation-error:0.14830 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMakers Batch Transform functionality. 
Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can peform inference on a large number of samples. An example of this in industry might be peforming an end of month report. This method of inference can also be useful to us as it means to can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer objects from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output .....................[2020-05-10 16:04:51 +0000] [16] [INFO] Starting gunicorn 19.10.0 [2020-05-10 16:04:51 +0000] [16] [INFO] Listening at: unix:/tmp/gunicorn.sock (16) [2020-05-10 16:04:51 +0000] [16] [INFO] Using worker: gevent [2020-05-10 16:04:51 +0000] [23] [INFO] Booting worker with pid: 23 [2020-05-10 16:04:51 +0000] [25] [INFO] Booting worker with pid: 25 [2020-05-10 16:04:51 +0000] [24] [INFO] Booting worker with pid: 24 [2020-05-10 16:04:51 +0000] [26] [INFO] Booting worker with pid: 26 [2020-05-10:16:04:56:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [10/May/2020:16:04:56 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" [2020-05-10:16:04:56:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [10/May/2020:16:04:56 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" 2020-05-10T16:04:56.717:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-05-10:16:04:59:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:04:59:INFO] No GPUs detected (normal if no gpus installed) [2020-05-10:16:04:59:INFO] No GPUs detected (normal if no gpus installed) [2020-05-10:16:04:59:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:04:59:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:04:59:INFO] No GPUs detected (normal if no gpus installed) [2020-05-10:16:04:59:INFO] No GPUs detected (normal if no gpus installed) [2020-05-10:16:04:59:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:04:59:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:04:59:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:04:59:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:04:59:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:05:03 +0000] "POST /invocations HTTP/1.1" 200 12183 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:05:03 +0000] 
"POST /invocations HTTP/1.1" 200 12183 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:05:03 +0000] "POST /invocations HTTP/1.1" 200 12217 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:05:03 +0000] "POST /invocations HTTP/1.1" 200 12160 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:05:03 +0000] "POST /invocations HTTP/1.1" 200 12196 "-" "Go-http-client/1.1" [2020-05-10:16:05:03:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:05:03:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:05:03:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:05:03:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:05:03 +0000] "POST /invocations HTTP/1.1" 200 12217 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:05:03 +0000] "POST /invocations HTTP/1.1" 200 12160 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:05:03 +0000] "POST /invocations HTTP/1.1" 200 12196 "-" "Go-http-client/1.1" [2020-05-10:16:05:03:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:05:03:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:05:03:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:05:03:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:05:07:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:05:07:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:05:10 +0000] "POST /invocations HTTP/1.1" 200 12160 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:05:10 +0000] "POST /invocations HTTP/1.1" 200 12185 "-" "Go-http-client/1.1" [2020-05-10:16:05:10:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:05:10 +0000] "POST /invocations HTTP/1.1" 200 12198 "-" "Go-http-client/1.1" [2020-05-10:16:05:10:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:05:10 +0000] "POST /invocations HTTP/1.1" 200 12160 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:05:10 +0000] "POST /invocations HTTP/1.1" 200 12185 "-" "Go-http-client/1.1" [2020-05-10:16:05:10:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:05:10 +0000] "POST /invocations HTTP/1.1" 200 12198 "-" "Go-http-client/1.1" [2020-05-10:16:05:10:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:05:10 +0000] "POST /invocations HTTP/1.1" 200 12182 "-" "Go-http-client/1.1" [2020-05-10:16:05:11:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:05:11:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:05:10 +0000] "POST /invocations HTTP/1.1" 200 12182 "-" "Go-http-client/1.1" [2020-05-10:16:05:11:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:05:11:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:05:14 +0000] "POST /invocations HTTP/1.1" 200 12194 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:05:14 +0000] "POST /invocations HTTP/1.1" 200 12127 "-" "Go-http-client/1.1" [2020-05-10:16:05:14:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:05:14 +0000] "POST /invocations HTTP/1.1" 200 12184 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:05:14 +0000] "POST /invocations HTTP/1.1" 200 12187 "-" "Go-http-client/1.1" [2020-05-10:16:05:14:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:05:14:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:05:14:INFO] 
Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:05:14 +0000] "POST /invocations HTTP/1.1" 200 12194 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:05:14 +0000] "POST /invocations HTTP/1.1" 200 12127 "-" "Go-http-client/1.1" [2020-05-10:16:05:14:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:05:14 +0000] "POST /invocations HTTP/1.1" 200 12184 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:05:14 +0000] "POST /invocations HTTP/1.1" 200 12187 "-" "Go-http-client/1.1" [2020-05-10:16:05:14:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:05:14:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:05:14:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:05:17 +0000] "POST /invocations HTTP/1.1" 200 12180 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:05:17 +0000] "POST /invocations HTTP/1.1" 200 12180 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:05:18 +0000] "POST /invocations HTTP/1.1" 200 12160 "-" "Go-http-client/1.1" [2020-05-10:16:05:18:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:05:18 +0000] "POST /invocations HTTP/1.1" 200 12216 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:05:18 +0000] "POST /invocations HTTP/1.1" 200 12220 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:05:18 +0000] "POST /invocations HTTP/1.1" 200 12160 "-" "Go-http-client/1.1" [2020-05-10:16:05:18:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:05:18 +0000] "POST /invocations HTTP/1.1" 200 12216 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:05:18 +0000] "POST /invocations HTTP/1.1" 200 12220 "-" "Go-http-client/1.1" [2020-05-10:16:05:18:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:05:18:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:05:18:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:05:18:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:05:18:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:05:18:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/472.8 KiB (2.3 MiB/s) with 1 file(s) remaining Completed 472.8 KiB/472.8 KiB (4.0 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-053596391548/sagemaker-xgboost-2020-05-10-16-01-27-049/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. 
As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(lowercase = True, preprocessor = lambda x: x, tokenizer = lambda x: x, vocabulary = vocabulary) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. 
###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location prefix = 'sentiment-update' new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_X = new_XV = None ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ........................[2020-05-10 16:25:19 +0000] [16] [INFO] Starting gunicorn 19.10.0 [2020-05-10 16:25:19 +0000] [16] [INFO] Listening at: unix:/tmp/gunicorn.sock (16) [2020-05-10 16:25:19 +0000] [16] [INFO] Using worker: gevent [2020-05-10 16:25:19 +0000] [23] [INFO] Booting worker with pid: 23 [2020-05-10 16:25:19 +0000] [24] [INFO] Booting worker with pid: 24 [2020-05-10 16:25:19 +0000] [28] [INFO] Booting worker with pid: 28 [2020-05-10 16:25:19 +0000] [29] [INFO] Booting worker with pid: 29 [2020-05-10:16:25:42:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [10/May/2020:16:25:42 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:42 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" [2020-05-10:16:25:42:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [10/May/2020:16:25:42 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:42 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" [2020-05-10:16:25:44:INFO] No GPUs detected (normal if no gpus installed) [2020-05-10:16:25:44:INFO] No GPUs detected (normal if no gpus installed) [2020-05-10:16:25:45:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:45:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:45:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:44:INFO] No GPUs detected (normal if no gpus installed) [2020-05-10:16:25:44:INFO] No GPUs detected (normal if no gpus installed) [2020-05-10:16:25:45:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:45:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:45:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:45:INFO] No GPUs detected (normal if no gpus installed) [2020-05-10:16:25:45:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:45:INFO] No GPUs detected (normal if no gpus installed) [2020-05-10:16:25:45:INFO] Determined delimiter of CSV input is ',' 2020-05-10T16:25:42.166:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD 169.254.255.130 - - [10/May/2020:16:25:48 +0000] "POST /invocations HTTP/1.1" 200 12169 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:48 +0000] "POST /invocations HTTP/1.1" 200 12194 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:48 +0000] "POST /invocations HTTP/1.1" 200 12159 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:48 +0000] "POST /invocations HTTP/1.1" 200 12203 
"-" "Go-http-client/1.1" [2020-05-10:16:25:48:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:25:48 +0000] "POST /invocations HTTP/1.1" 200 12169 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:48 +0000] "POST /invocations HTTP/1.1" 200 12194 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:48 +0000] "POST /invocations HTTP/1.1" 200 12159 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:48 +0000] "POST /invocations HTTP/1.1" 200 12203 "-" "Go-http-client/1.1" [2020-05-10:16:25:48:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:48:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:48:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:48:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:48:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:48:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:48:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:25:52 +0000] "POST /invocations HTTP/1.1" 200 12179 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:52 +0000] "POST /invocations HTTP/1.1" 200 12146 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:52 +0000] "POST /invocations HTTP/1.1" 200 12179 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:52 +0000] "POST /invocations HTTP/1.1" 200 12193 "-" "Go-http-client/1.1" [2020-05-10:16:25:52:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:52:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:52:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:25:52 +0000] "POST /invocations HTTP/1.1" 200 12179 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:52 +0000] "POST /invocations HTTP/1.1" 200 12146 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:52 +0000] "POST /invocations HTTP/1.1" 200 12179 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:52 +0000] "POST /invocations HTTP/1.1" 200 12193 "-" "Go-http-client/1.1" [2020-05-10:16:25:52:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:52:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:52:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:52:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:52:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:25:55 +0000] "POST /invocations HTTP/1.1" 200 12190 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:56 +0000] "POST /invocations HTTP/1.1" 200 12171 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:56 +0000] "POST /invocations HTTP/1.1" 200 12158 "-" "Go-http-client/1.1" [2020-05-10:16:25:56:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:56:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:56:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:56:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:25:55 +0000] "POST /invocations HTTP/1.1" 200 12190 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:56 +0000] "POST /invocations HTTP/1.1" 200 12171 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:56 +0000] "POST /invocations HTTP/1.1" 200 12158 "-" "Go-http-client/1.1" [2020-05-10:16:25:56:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:56:INFO] Determined delimiter of CSV input is ',' 
[2020-05-10:16:25:56:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:56:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:16:25:59 +0000] "POST /invocations HTTP/1.1" 200 12208 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:59 +0000] "POST /invocations HTTP/1.1" 200 12187 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:59 +0000] "POST /invocations HTTP/1.1" 200 12192 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:59 +0000] "POST /invocations HTTP/1.1" 200 12208 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:59 +0000] "POST /invocations HTTP/1.1" 200 12187 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:59 +0000] "POST /invocations HTTP/1.1" 200 12192 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:59 +0000] "POST /invocations HTTP/1.1" 200 12181 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:16:25:59 +0000] "POST /invocations HTTP/1.1" 200 12181 "-" "Go-http-client/1.1" [2020-05-10:16:25:59:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:59:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:59:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:26:00:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:59:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:59:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:25:59:INFO] Determined delimiter of CSV input is ',' [2020-05-10:16:26:00:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/473.0 KiB (3.2 MiB/s) with 1 file(s) remaining Completed 473.0 KiB/473.0 KiB (5.7 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-053596391548/sagemaker-xgboost-2020-05-10-16-21-43-155/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. 
This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code from time import gmtime, strftime

# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
xgb_model_name = "IMDB-update-xgboost-model" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())

# Create the inference container
xgb_primary_container = {
    "Image": container,
    "ModelDataUrl": xgb.model_data
}

xgb_model_info = session.sagemaker_client.create_model(
    ModelName = xgb_model_name,
    ExecutionRoleArn = role,
    PrimaryContainer = xgb_primary_container)

# Create the endpoint configuration
xgb_endpoint_config_name = "IMDB-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())

xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
    EndpointConfigName = xgb_endpoint_config_name,
    ProductionVariants = [{
        "InstanceType": "ml.m4.xlarge",
        "InitialVariantWeight": 1,
        "InitialInstanceCount": 1,
        "ModelName": xgb_model_name,
        "VariantName": "XGB-Model"
    }])

# Then give the endpoint itself a unique name
endpoint_name = "IMDB-update-endpoint-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())

# And then we can deploy our endpoint
xgb_predictor = session.sagemaker_client.create_endpoint(
    EndpointName = endpoint_name,
    EndpointConfigName = xgb_endpoint_config_name)

predictor_dec = session.wait_for_endpoint(endpoint_name)
endpoint_name ###Output _____no_output_____ ###Markdown Diagnose the problem Now that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code # We need to tell the endpoint what format the data we are sending is in so that SageMaker can
# perform the serialization. With the high-level API this would be done on a RealTimePredictor:
#
#     from sagemaker.predictor import csv_serializer
#     xgb_predictor.content_type = 'text/csv'
#     xgb_predictor.serializer = csv_serializer
#
# Here, however, xgb_predictor is just the boto3 response dictionary returned by create_endpoint,
# so there is no predictor object to configure. Instead we pass ContentType='text/csv' explicitly
# to each invoke_endpoint call below. ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y):
    for idx, smp in enumerate(in_X):
        response = session.sagemaker_runtime_client.invoke_endpoint(
            EndpointName = endpoint_name,
            ContentType = 'text/csv',
            Body = ','.join(map(str, in_XV[idx])))
        result = round(float(response['Body'].read().decode("utf-8")))
        if result != in_Y[idx]:
            yield smp, in_Y[idx]

gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator.
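Since `gn` is an ordinary Python generator, we can also pull several misclassified examples in one go. The snippet below is an illustrative sketch that is not part of the original notebook: it assumes `gn` and the endpoint above are still live, and because it consumes samples from the same generator, running it first will change which review the single `next` call below returns. ###Code from itertools import islice

# Collect the first three misclassified reviews without exhausting the generator.
# get_sample yields the processed review (a list of words) together with its true label.
for sample in islice(gn, 3):
    words, true_label = sample[0], sample[-1]
    print('true label:', true_label)
    print('first 25 tokens:', ' '.join(words[:25]))
    print('-' * 60)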
###Code print(next(gn)) ###Output (['privat', 'practic', 'suppos', 'medic', 'drama', 'guess', 'biggest', 'complaint', 'lack', 'origin', 'medic', 'stori', 'line', 'watch', 'hous', 'solv', 'two', 'nine', 'medic', 'mysteri', 'doctor', 'booor', 'serious', 'lazi', 'writer', 'copi', 'case', 'older', 'er', 'episod', 'obscur', 'brazilian', 'medic', 'soap', 'hous', 'recent', 'popular', 'recycl', 'idea', 'hard', 'get', 'away', 'second', 'biggest', 'complaint', 'peopl', 'suppos', 'forti', 'someth', 'right', 'behav', 'emot', 'matur', '15', 'year', 'old', 'three', 'week', 'ie', 'three', 'whole', 'damn', 'episod', 'intens', 'think', 'realli', 'necessari', 'understand', 'best', 'friend', 'want', 'friend', 'benefit', 'mayb', 'want', 'hurt', 'want', 'risk', 'friendship', 'charact', 'think', 'psychiatrist', 'way', 'whole', 'storylin', 'unrealist', 'realli', 'buy', 'suppos', 'drama', 'even', 'start', 'complain', 'show', 'everyon', 'favorit', 'addison', 'got', 'know', 'grey', 'anatomi', 'sidenot', 'think', 'funni', 'way', 'addison', 'end', 'lust', 'loser', 'pete', 'sorri', 'everyon', 'tri', 'cure', 'insomnia', 'mozart', 'requiem', 'loser', 'phd', 'derek', 'end', 'entangl', 'relationship', 'whini', 'irrit', 'meredith', 'mile', 'away', 'raini', 'seattl', 'apart', 'littl', 'fling', 'mark', 'seem', 'perfect', 'sometim', 'think', 'shonda', 'rhime', 'subconsci', 'tri', 'tell', 'us', 'relationship', 'first', 'choic', 'often', 'right', 'one'], '', 0) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:507: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None' warnings.warn("The parameter 'token_pattern' will not be used" ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'reincarn', 'ghetto', 'spill', 'weari', 'victorian', 'playboy', '21st'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'optimist', 'masterson', 'omin', 'orchestr', 'banana', 'dubiou', 'sophi'} ###Markdown These words themselves don't tell us much, however if one of these words occured with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here. 
Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. ###Code np.argsort(new_XV.sum(axis = 0))[::-1][:10] new_XV[:,2757].sum() for idx in [2972, 1750, 3155, 2654, 4519, 1989, 2757, 808, 1944, 3920]: for k, v in new_vectorizer.vocabulary_.items(): if v == idx: print(f"{k} appears {new_XV[:,idx].sum()} times") ###Output movement appears 51695 times filler appears 48190 times omen appears 27741 times like appears 22799 times tim appears 16191 times good appears 15360 times make appears 15207 times charact appears 14178 times gestur appears 14141 times seduc appears 14111 times ###Markdown Why are the words `movement`, `filler`, and `omen` so frequent in the new corpus? Movement appears like 3 times as frequently as `good` ... that's weird. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send an bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. 
This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='text/csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='text/csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2020-05-10 17:33:45 Starting - Starting the training job... 2020-05-10 17:33:46 Starting - Launching requested ML instances...... 2020-05-10 17:34:50 Starting - Preparing the instances for training...... 2020-05-10 17:36:00 Downloading - Downloading input data... 2020-05-10 17:36:26 Training - Downloading the training image... 2020-05-10 17:36:55 Training - Training image download completed. 
Training in progress.INFO:sagemaker-containers:Imported framework sagemaker_xgboost_container.training INFO:sagemaker-containers:Failed to parse hyperparameter objective value binary:logistic to Json. Returning the value itself INFO:sagemaker-containers:No GPUs detected (normal if no gpus installed) INFO:sagemaker_xgboost_container.training:Running XGBoost Sagemaker in algorithm mode INFO:root:Determined delimiter of CSV input is ',' INFO:root:Determined delimiter of CSV input is ',' INFO:root:Determined delimiter of CSV input is ',' [17:37:01] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, INFO:root:Determined delimiter of CSV input is ',' [17:37:03] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, INFO:root:Single node training. INFO:root:Train matrix has 15000 rows INFO:root:Validation matrix has 10000 rows [17:37:03] WARNING: /workspace/src/learner.cc:328:  Parameters: { early_stopping_rounds, num_round, silent } might not be used. This may not be accurate due to some parameters are only used in language bindings but passed down to XGBoost core. Or some parameters are not used but slip through this verification. Please open an issue if you find above cases.  [0]#011train-error:0.30787#011validation-error:0.30350 [1]#011train-error:0.29927#011validation-error:0.29850 [2]#011train-error:0.28660#011validation-error:0.28450 [3]#011train-error:0.28147#011validation-error:0.28050 [4]#011train-error:0.26933#011validation-error:0.26620 [5]#011train-error:0.25953#011validation-error:0.26050 [6]#011train-error:0.25827#011validation-error:0.25410 [7]#011train-error:0.25513#011validation-error:0.25340 [8]#011train-error:0.24787#011validation-error:0.25070 [9]#011train-error:0.24500#011validation-error:0.24900 [10]#011train-error:0.23560#011validation-error:0.24250 [11]#011train-error:0.23113#011validation-error:0.23780 [12]#011train-error:0.22393#011validation-error:0.23000 [13]#011train-error:0.22027#011validation-error:0.22710 [14]#011train-error:0.21753#011validation-error:0.22500 [15]#011train-error:0.21420#011validation-error:0.22350 [16]#011train-error:0.21073#011validation-error:0.22090 [17]#011train-error:0.20633#011validation-error:0.21730 [18]#011train-error:0.20380#011validation-error:0.21180 [19]#011train-error:0.20180#011validation-error:0.20930 [20]#011train-error:0.20053#011validation-error:0.20780 [21]#011train-error:0.19800#011validation-error:0.20510 [22]#011train-error:0.19480#011validation-error:0.20190 [23]#011train-error:0.19053#011validation-error:0.20120 [24]#011train-error:0.18813#011validation-error:0.19800 [25]#011train-error:0.18653#011validation-error:0.19540 [26]#011train-error:0.18453#011validation-error:0.19410 [27]#011train-error:0.18387#011validation-error:0.19450 [28]#011train-error:0.18053#011validation-error:0.19480 [29]#011train-error:0.17853#011validation-error:0.19350 [30]#011train-error:0.17660#011validation-error:0.19300 [31]#011train-error:0.17507#011validation-error:0.19210 [32]#011train-error:0.17293#011validation-error:0.19130 [33]#011train-error:0.17320#011validation-error:0.19040 [34]#011train-error:0.17233#011validation-error:0.18980 [35]#011train-error:0.17053#011validation-error:0.18930 [36]#011train-error:0.16900#011validation-error:0.18870 [37]#011train-error:0.16833#011validation-error:0.18800 [38]#011train-error:0.16687#011validation-error:0.18800 [39]#011train-error:0.16560#011validation-error:0.18710 
[40]#011train-error:0.16347#011validation-error:0.18560 [41]#011train-error:0.16233#011validation-error:0.18590 [42]#011train-error:0.16040#011validation-error:0.18470 [43]#011train-error:0.16040#011validation-error:0.18380 [44]#011train-error:0.15967#011validation-error:0.18340 [45]#011train-error:0.15740#011validation-error:0.18270 [46]#011train-error:0.15553#011validation-error:0.18300 [47]#011train-error:0.15540#011validation-error:0.18380 [48]#011train-error:0.15527#011validation-error:0.18420 [49]#011train-error:0.15407#011validation-error:0.18430 [50]#011train-error:0.15227#011validation-error:0.18330 [51]#011train-error:0.15407#011validation-error:0.18320 [52]#011train-error:0.15387#011validation-error:0.18270 [53]#011train-error:0.15387#011validation-error:0.18300 [54]#011train-error:0.15273#011validation-error:0.18210 [55]#011train-error:0.15107#011validation-error:0.18130 [56]#011train-error:0.15013#011validation-error:0.18150 [57]#011train-error:0.14973#011validation-error:0.17990 [58]#011train-error:0.14900#011validation-error:0.18000 [59]#011train-error:0.14740#011validation-error:0.17740 [60]#011train-error:0.14600#011validation-error:0.17670 [61]#011train-error:0.14560#011validation-error:0.17590 [62]#011train-error:0.14460#011validation-error:0.17600 [63]#011train-error:0.14447#011validation-error:0.17520 [64]#011train-error:0.14447#011validation-error:0.17450 [65]#011train-error:0.14407#011validation-error:0.17390 [66]#011train-error:0.14353#011validation-error:0.17480 [67]#011train-error:0.14300#011validation-error:0.17390 [68]#011train-error:0.14207#011validation-error:0.17330 [69]#011train-error:0.14120#011validation-error:0.17400 [70]#011train-error:0.14013#011validation-error:0.17390 [71]#011train-error:0.13853#011validation-error:0.17320 [72]#011train-error:0.13787#011validation-error:0.17340 [73]#011train-error:0.13727#011validation-error:0.17260 [74]#011train-error:0.13693#011validation-error:0.17210 [75]#011train-error:0.13607#011validation-error:0.17250 [76]#011train-error:0.13533#011validation-error:0.17190 [77]#011train-error:0.13460#011validation-error:0.17110 [78]#011train-error:0.13427#011validation-error:0.17240 [79]#011train-error:0.13427#011validation-error:0.17200 [80]#011train-error:0.13260#011validation-error:0.17200 [81]#011train-error:0.13240#011validation-error:0.17130 [82]#011train-error:0.13227#011validation-error:0.17200 [83]#011train-error:0.13147#011validation-error:0.17220 [84]#011train-error:0.13147#011validation-error:0.17270 [85]#011train-error:0.13113#011validation-error:0.17230 [86]#011train-error:0.13147#011validation-error:0.17180 [87]#011train-error:0.13100#011validation-error:0.17360 2020-05-10 17:38:54 Uploading - Uploading generated training model 2020-05-10 17:38:54 Completed - Training job completed Training seconds: 174 Billable seconds: 174 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? 
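One possible way to address it, sketched here as an illustration rather than as the notebook's prescribed answer: hold out a slice of the newly collected reviews *before* any retraining, fit and validate the new model only on the remainder, and score the held-out slice exactly once at the end. The variable names and the 20% split below are assumptions, and in this notebook the split would have to happen back where the new data was prepared, before `new_XV` was cleared from memory. ###Code from sklearn.model_selection import train_test_split

# Illustrative sketch: reserve 20% of the new reviews as a true hold-out set before
# fitting anything, so the final check is not performed on data the model was trained on.
fit_X, holdout_X, fit_y, holdout_y = train_test_split(new_XV, new_Y, test_size=0.2, random_state=42)

# fit_X / fit_y         -> split further into train and validation sets for the new XGBoost model
# holdout_X / holdout_y -> touched only once, to estimate how well the new model generalizes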
First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ......................[2020-05-10 17:47:39 +0000] [16] [INFO] Starting gunicorn 19.10.0 [2020-05-10 17:47:39 +0000] [16] [INFO] Listening at: unix:/tmp/gunicorn.sock (16) [2020-05-10 17:47:39 +0000] [16] [INFO] Using worker: gevent [2020-05-10 17:47:39 +0000] [23] [INFO] Booting worker with pid: 23 [2020-05-10 17:47:39 +0000] [24] [INFO] Booting worker with pid: 24 [2020-05-10 17:47:40 +0000] [25] [INFO] Booting worker with pid: 25 [2020-05-10 17:47:40 +0000] [29] [INFO] Booting worker with pid: 29 [2020-05-10:17:47:46:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [10/May/2020:17:47:46 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" [2020-05-10:17:47:46:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [10/May/2020:17:47:46 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" [2020-05-10:17:47:46:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [10/May/2020:17:47:46 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" [2020-05-10:17:47:46:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [10/May/2020:17:47:46 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" 2020-05-10T17:47:46.302:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-05-10:17:47:49:INFO] No GPUs detected (normal if no gpus installed) [2020-05-10:17:47:49:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:47:49:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:47:49:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:47:49:INFO] No GPUs detected (normal if no gpus installed) [2020-05-10:17:47:49:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:47:49:INFO] No GPUs detected (normal if no gpus installed) [2020-05-10:17:47:49:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:47:49:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:47:49:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:47:49:INFO] No GPUs detected (normal if no gpus installed) [2020-05-10:17:47:49:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:17:47:52 +0000] "POST /invocations HTTP/1.1" 200 12118 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:17:47:52 +0000] "POST /invocations HTTP/1.1" 200 12098 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:17:47:52 +0000] "POST /invocations HTTP/1.1" 200 12099 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:17:47:52 +0000] "POST /invocations HTTP/1.1" 200 12111 "-" "Go-http-client/1.1" [2020-05-10:17:47:53:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:17:47:52 +0000] "POST /invocations HTTP/1.1" 200 
12118 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:17:47:52 +0000] "POST /invocations HTTP/1.1" 200 12098 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:17:47:52 +0000] "POST /invocations HTTP/1.1" 200 12099 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:17:47:52 +0000] "POST /invocations HTTP/1.1" 200 12111 "-" "Go-http-client/1.1" [2020-05-10:17:47:53:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:47:53:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:47:53:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:47:53:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:47:53:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:47:53:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:47:53:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:17:47:56 +0000] "POST /invocations HTTP/1.1" 200 12097 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:17:47:56 +0000] "POST /invocations HTTP/1.1" 200 12086 "-" "Go-http-client/1.1" [2020-05-10:17:47:56:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:17:47:56 +0000] "POST /invocations HTTP/1.1" 200 12103 "-" "Go-http-client/1.1" [2020-05-10:17:47:56:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:17:47:56 +0000] "POST /invocations HTTP/1.1" 200 12107 "-" "Go-http-client/1.1" [2020-05-10:17:47:56:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:17:47:56 +0000] "POST /invocations HTTP/1.1" 200 12097 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:17:47:56 +0000] "POST /invocations HTTP/1.1" 200 12086 "-" "Go-http-client/1.1" [2020-05-10:17:47:56:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:17:47:56 +0000] "POST /invocations HTTP/1.1" 200 12103 "-" "Go-http-client/1.1" [2020-05-10:17:47:56:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:17:47:56 +0000] "POST /invocations HTTP/1.1" 200 12107 "-" "Go-http-client/1.1" [2020-05-10:17:47:56:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:48:00:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:48:00:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:48:00:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:48:00:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:17:48:03 +0000] "POST /invocations HTTP/1.1" 200 12116 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:17:48:03 +0000] "POST /invocations HTTP/1.1" 200 12105 "-" "Go-http-client/1.1" [2020-05-10:17:48:04:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:17:48:04 +0000] "POST /invocations HTTP/1.1" 200 12115 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:17:48:03 +0000] "POST /invocations HTTP/1.1" 200 12116 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:17:48:03 +0000] "POST /invocations HTTP/1.1" 200 12105 "-" "Go-http-client/1.1" [2020-05-10:17:48:04:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:17:48:04 +0000] "POST /invocations HTTP/1.1" 200 12115 "-" "Go-http-client/1.1" [2020-05-10:17:48:04:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:17:48:04 +0000] "POST /invocations HTTP/1.1" 200 12126 "-" "Go-http-client/1.1" [2020-05-10:17:48:04:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:48:04:INFO] Determined delimiter of CSV input 
is ',' [2020-05-10:17:48:04:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:17:48:04 +0000] "POST /invocations HTTP/1.1" 200 12126 "-" "Go-http-client/1.1" [2020-05-10:17:48:04:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:48:04:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:17:48:07 +0000] "POST /invocations HTTP/1.1" 200 12086 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:17:48:07 +0000] "POST /invocations HTTP/1.1" 200 12107 "-" "Go-http-client/1.1" [2020-05-10:17:48:07:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:48:07:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:17:48:07 +0000] "POST /invocations HTTP/1.1" 200 12110 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:17:48:07 +0000] "POST /invocations HTTP/1.1" 200 12120 "-" "Go-http-client/1.1" [2020-05-10:17:48:08:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:17:48:07 +0000] "POST /invocations HTTP/1.1" 200 12086 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:17:48:07 +0000] "POST /invocations HTTP/1.1" 200 12107 "-" "Go-http-client/1.1" [2020-05-10:17:48:07:INFO] Determined delimiter of CSV input is ',' [2020-05-10:17:48:07:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [10/May/2020:17:48:07 +0000] "POST /invocations HTTP/1.1" 200 12110 "-" "Go-http-client/1.1" 169.254.255.130 - - [10/May/2020:17:48:07 +0000] "POST /invocations HTTP/1.1" 200 12120 "-" "Go-http-client/1.1" [2020-05-10:17:48:08:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/470.0 KiB (3.9 MiB/s) with 1 file(s) remaining Completed 470.0 KiB/470.0 KiB (6.9 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-053596391548/sagemaker-xgboost-2020-05-10-17-44-09-047/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. 
###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) test_X = new_XV = None new_vocabulary = original_vocabulary = None vectorizer = new_vectorizer = None gn = None s3_new_input_train = None s3_new_input_validation = None del(predictions) vocabulary = None new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "IMDB-new-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. 
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName = endpoint_name, EndpointConfigName = new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(endpoint_name) ###Output -----------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code session.sagemaker_client.delete_endpoint(EndpointName = endpoint_name) ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. 
###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output _____no_output_____ ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output _____no_output_____ ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])

val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.

For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
    os.makedirs(data_dir)

pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)

# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3

Amazon's S3 service allows us to store files that can be accessed by both the built-in training models, such as the XGBoost model we will be using, as well as custom models such as the one we will see a little later.

For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker, which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach, although using the low-level approach is certainly an option.

Recall the method `upload_data()`, which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.

For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker

session = sagemaker.Session() # Store the current SageMaker session

# S3 prefix (which folder will we use)
prefix = 'sentiment-update'

test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
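###Markdown
If you would rather confirm the upload programmatically instead of through the S3 console, a minimal sketch along these lines (assuming the notebook's execution role is allowed to list the default bucket) prints the objects that `upload_data()` just created under our prefix.
###Code
import boto3

# List the objects uploaded to the default SageMaker bucket under our key prefix.
# Assumes the role attached to this notebook has permission to list that bucket.
s3_client = boto3.client('s3')
response = s3_client.list_objects_v2(Bucket=session.default_bucket(), Prefix=prefix)
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])
###Output
_____no_output_____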
###Markdown
Creating the XGBoost model

Now that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.

- Model Artifacts
- Training Code (Container)
- Inference Code (Container)

The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.

The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.

The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role

# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()

# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')

# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
                                    role,      # What is our current IAM Role
                                    train_instance_count=1, # How many compute instances
                                    train_instance_type='ml.m4.xlarge', # What kind of compute instances
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                    sagemaker_session=session)

# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
                        eta=0.2,
                        gamma=4,
                        min_child_weight=6,
                        subsample=0.8,
                        silent=0,
                        objective='binary:logistic',
                        early_stopping_rounds=10,
                        num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost model

Now that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')

xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Testing the model

Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real time. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set.

To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job.
When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output _____no_output_____ ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. 
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ................................2020-08-28T05:43:19.883:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2020-08-28 05:43:19 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-08-28 05:43:19 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-08-28 05:43:19 +0000] [1] [INFO] Using worker: gevent Arguments: serve [2020-08-28 05:43:19 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-08-28 05:43:19 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-08-28 05:43:19 +0000] [1] [INFO] Using worker: gevent [2020-08-28 05:43:19 +0000] [37] [INFO] Booting worker with pid: 37 [2020-08-28 05:43:19 +0000] [38] [INFO] Booting worker with pid: 38 [2020-08-28:05:43:19:INFO] Model loaded successfully for worker : 37 [2020-08-28 05:43:19 +0000] [39] [INFO] Booting worker with pid: 39 [2020-08-28:05:43:19:INFO] Model loaded successfully for worker : 38 [2020-08-28 05:43:19 +0000] [40] [INFO] Booting worker with pid: 40 [2020-08-28:05:43:20:INFO] Model loaded successfully for worker : 39 [2020-08-28:05:43:20:INFO] Model loaded successfully for worker : 40 [2020-08-28:05:43:20:INFO] Sniff delimiter as ',' [2020-08-28:05:43:20:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:20:INFO] Sniff delimiter as ',' [2020-08-28:05:43:20:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:20:INFO] Sniff delimiter as ',' [2020-08-28:05:43:20:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:20:INFO] Sniff delimiter as ',' [2020-08-28 05:43:19 +0000] [37] [INFO] Booting worker with pid: 37 [2020-08-28 05:43:19 +0000] [38] [INFO] Booting worker with pid: 38 [2020-08-28:05:43:19:INFO] Model loaded successfully for worker : 37 [2020-08-28 05:43:19 +0000] [39] [INFO] Booting worker with pid: 39 [2020-08-28:05:43:19:INFO] Model loaded successfully for worker : 38 [2020-08-28 05:43:19 +0000] [40] [INFO] Booting worker with pid: 40 [2020-08-28:05:43:20:INFO] Model loaded successfully for worker : 39 [2020-08-28:05:43:20:INFO] Model loaded successfully for worker : 40 [2020-08-28:05:43:20:INFO] Sniff delimiter as ',' [2020-08-28:05:43:20:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:20:INFO] Sniff delimiter as ',' [2020-08-28:05:43:20:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:20:INFO] Sniff delimiter as ',' [2020-08-28:05:43:20:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:20:INFO] Sniff delimiter as ',' [2020-08-28:05:43:20:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:20:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:22:INFO] Sniff delimiter as ',' [2020-08-28:05:43:22:INFO] Sniff delimiter as ',' [2020-08-28:05:43:22:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:22:INFO] Sniff delimiter as ',' [2020-08-28:05:43:22:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:22:INFO] Sniff delimiter as ',' [2020-08-28:05:43:22:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:22:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:22:INFO] Sniff delimiter as ',' [2020-08-28:05:43:22:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:22:INFO] Sniff delimiter as ',' [2020-08-28:05:43:22:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:23:INFO] Sniff delimiter as ',' [2020-08-28:05:43:23:INFO] Determined delimiter of 
CSV input is ',' [2020-08-28:05:43:23:INFO] Sniff delimiter as ',' [2020-08-28:05:43:23:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:25:INFO] Sniff delimiter as ',' [2020-08-28:05:43:25:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:25:INFO] Sniff delimiter as ',' [2020-08-28:05:43:25:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:25:INFO] Sniff delimiter as ',' [2020-08-28:05:43:25:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:25:INFO] Sniff delimiter as ',' [2020-08-28:05:43:25:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:25:INFO] Sniff delimiter as ',' [2020-08-28:05:43:25:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:25:INFO] Sniff delimiter as ',' [2020-08-28:05:43:25:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:25:INFO] Sniff delimiter as ',' [2020-08-28:05:43:25:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:25:INFO] Sniff delimiter as ',' [2020-08-28:05:43:25:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:27:INFO] Sniff delimiter as ',' [2020-08-28:05:43:27:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:27:INFO] Sniff delimiter as ',' [2020-08-28:05:43:27:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:27:INFO] Sniff delimiter as ',' [2020-08-28:05:43:27:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:27:INFO] Sniff delimiter as ',' [2020-08-28:05:43:27:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:27:INFO] Sniff delimiter as ',' [2020-08-28:05:43:27:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:27:INFO] Sniff delimiter as ',' [2020-08-28:05:43:27:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:27:INFO] Sniff delimiter as ',' [2020-08-28:05:43:27:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:27:INFO] Sniff delimiter as ',' [2020-08-28:05:43:27:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:29:INFO] Sniff delimiter as ',' [2020-08-28:05:43:29:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:30:INFO] Sniff delimiter as ',' [2020-08-28:05:43:30:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:29:INFO] Sniff delimiter as ',' [2020-08-28:05:43:29:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:30:INFO] Sniff delimiter as ',' [2020-08-28:05:43:30:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:30:INFO] Sniff delimiter as ',' [2020-08-28:05:43:30:INFO] Sniff delimiter as ',' [2020-08-28:05:43:30:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:30:INFO] Sniff delimiter as ',' [2020-08-28:05:43:30:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:30:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:30:INFO] Sniff delimiter as ',' [2020-08-28:05:43:30:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:32:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:32:INFO] Sniff delimiter as ',' [2020-08-28:05:43:32:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:32:INFO] Sniff delimiter as ',' [2020-08-28:05:43:32:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:32:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:32:INFO] Sniff delimiter as ',' [2020-08-28:05:43:32:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:32:INFO] Sniff delimiter as ',' [2020-08-28:05:43:32:INFO] Determined delimiter 
of CSV input is ',' [2020-08-28:05:43:34:INFO] Sniff delimiter as ',' [2020-08-28:05:43:34:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:35:INFO] Sniff delimiter as ',' [2020-08-28:05:43:34:INFO] Sniff delimiter as ',' [2020-08-28:05:43:34:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:35:INFO] Sniff delimiter as ',' [2020-08-28:05:43:35:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:35:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:35:INFO] Sniff delimiter as ',' [2020-08-28:05:43:35:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:35:INFO] Sniff delimiter as ',' [2020-08-28:05:43:35:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:35:INFO] Sniff delimiter as ',' [2020-08-28:05:43:35:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:35:INFO] Sniff delimiter as ',' [2020-08-28:05:43:35:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:37:INFO] Sniff delimiter as ',' [2020-08-28:05:43:37:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:37:INFO] Sniff delimiter as ',' [2020-08-28:05:43:37:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:37:INFO] Sniff delimiter as ',' [2020-08-28:05:43:37:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:37:INFO] Sniff delimiter as ',' [2020-08-28:05:43:37:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:37:INFO] Sniff delimiter as ',' [2020-08-28:05:43:37:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:37:INFO] Sniff delimiter as ',' [2020-08-28:05:43:37:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:37:INFO] Sniff delimiter as ',' [2020-08-28:05:43:37:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:37:INFO] Sniff delimiter as ',' [2020-08-28:05:43:37:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:39:INFO] Sniff delimiter as ',' [2020-08-28:05:43:39:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:39:INFO] Sniff delimiter as ',' [2020-08-28:05:43:39:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:40:INFO] Sniff delimiter as ',' [2020-08-28:05:43:40:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:39:INFO] Sniff delimiter as ',' [2020-08-28:05:43:39:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:39:INFO] Sniff delimiter as ',' [2020-08-28:05:43:39:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:40:INFO] Sniff delimiter as ',' [2020-08-28:05:43:40:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:40:INFO] Sniff delimiter as ',' [2020-08-28:05:43:40:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:40:INFO] Sniff delimiter as ',' [2020-08-28:05:43:40:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:42:INFO] Sniff delimiter as ',' [2020-08-28:05:43:42:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:42:INFO] Sniff delimiter as ',' [2020-08-28:05:43:42:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:42:INFO] Sniff delimiter as ',' [2020-08-28:05:43:42:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:42:INFO] Sniff delimiter as ',' [2020-08-28:05:43:42:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:42:INFO] Sniff delimiter as ',' [2020-08-28:05:43:42:INFO] Determined delimiter of CSV input is ',' [2020-08-28:05:43:42:INFO] Sniff delimiter as ',' [2020-08-28:05:43:42:INFO] Determined delimiter of CSV input 
is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-ap-south-1-714138043953/xgboost-2020-08-28-05-38-16-002/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. Using already existing model: xgboost-2020-08-28-05-24-16-718 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. 
To get the *next* sample we simply call `next()` on our generator.
###Code
print(next(gn))
###Output
(['memori', 'last', 'hunt', 'stuck', 'sinc', 'saw', '1956', '13', 'movi', 'far', 'ahead', 'other', 'time', 'address', 'treatment', 'nativ', 'environ', 'ever', 'present', 'contrast', 'short', 'long', 'term', 'effect', 'greed', 'relev', 'today', '1956', 'cinemagraph', 'discuss', 'utmost', 'depth', 'relev', 'top', 'set', 'beauti', 'cinematographi', 'excel', 'memori', 'movi', 'end', 'day', 'banana'], 0)
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced, or some other artifact of popular culture has changed the way that people write movie reviews.

To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
                                 preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:484: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None'
  warnings.warn("The parameter 'token_pattern' will not be used"
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
{'substanti', 'monti', 'weari', 'omin', 'vastli', 'hostil', 'epitom'}
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
{'drone', 'ingrid', 'weaker', 'banana', 'masterson', 'scarfac', 'cypher'}
###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.

**Question:** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency, but also, what does this mean? What has changed about the world that our original model no longer takes into account?

**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data.
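One possible way to begin exploring is to count, for each word that appears only in the new vocabulary, how many of the new reviews actually contain it; a rough sketch of that check (assuming `new_X` is still loaded in memory at this point) is below.
###Code
from collections import Counter

# For each word that is new to the vocabulary, count the number of new reviews containing it.
candidate_words = new_vocabulary - original_vocabulary
doc_freq = Counter()
for review in new_X:
    for word in candidate_words & set(review):
        doc_freq[word] += 1

print(doc_freq.most_common())
###Output
_____no_output_____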
###Markdown
(TODO) Build a new model

Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.

To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.

**NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.

In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary that we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd

# Earlier we shuffled the training dataset, so to make things simple we can just assign
# the first 10,000 reviews to the validation set and use the remaining reviews for training.

new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])

new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like, but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.

**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, role, train_instance_count=1, train_instance_type='ml.m4.xlarge', output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2020-08-28 05:56:59 Starting - Starting the training job... 2020-08-28 05:57:01 Starting - Launching requested ML instances...... 2020-08-28 05:58:23 Starting - Preparing the instances for training...... 2020-08-28 05:59:12 Downloading - Downloading input data... 2020-08-28 05:59:54 Training - Training image download completed. Training in progress..Arguments: train [2020-08-28:05:59:55:INFO] Running standalone xgboost training. [2020-08-28:05:59:55:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8474.9mb [2020-08-28:05:59:55:INFO] Determined delimiter of CSV input is ',' [05:59:55] S3DistributionType set as FullyReplicated [05:59:57] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-08-28:05:59:57:INFO] Determined delimiter of CSV input is ',' [05:59:57] S3DistributionType set as FullyReplicated [05:59:58] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [06:00:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 0 pruned nodes, max_depth=5 [0]#011train-error:0.3072#011validation-error:0.3191 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. 
[06:00:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [1]#011train-error:0.291267#011validation-error:0.3016 [06:00:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 0 pruned nodes, max_depth=5 [2]#011train-error:0.291533#011validation-error:0.303 [06:00:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [3]#011train-error:0.287933#011validation-error:0.2994 [06:00:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [4]#011train-error:0.2794#011validation-error:0.2921 [06:00:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [5]#011train-error:0.272933#011validation-error:0.2877 [06:00:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [6]#011train-error:0.256267#011validation-error:0.269 [06:00:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [7]#011train-error:0.246133#011validation-error:0.2632 [06:00:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [8]#011train-error:0.2446#011validation-error:0.2595 [06:00:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [9]#011train-error:0.240667#011validation-error:0.2545 [06:00:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [10]#011train-error:0.2352#011validation-error:0.2508 [06:00:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [11]#011train-error:0.230467#011validation-error:0.247 [06:00:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [12]#011train-error:0.2248#011validation-error:0.2402 [06:00:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [13]#011train-error:0.221#011validation-error:0.2379 [06:00:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5 [14]#011train-error:0.217067#011validation-error:0.2355 [06:00:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [15]#011train-error:0.214067#011validation-error:0.2336 [06:00:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [16]#011train-error:0.210067#011validation-error:0.2313 [06:00:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [17]#011train-error:0.2088#011validation-error:0.2285 [06:00:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [18]#011train-error:0.206533#011validation-error:0.2253 [06:00:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 2 pruned nodes, max_depth=5 [19]#011train-error:0.201867#011validation-error:0.2226 [06:00:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [20]#011train-error:0.200067#011validation-error:0.2204 [06:00:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [21]#011train-error:0.198333#011validation-error:0.2167 [06:00:30] src/tree/updater_prune.cc:74: tree 
pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5 [22]#011train-error:0.193933#011validation-error:0.2151 [06:00:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [23]#011train-error:0.193#011validation-error:0.2129 [06:00:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5 [24]#011train-error:0.190533#011validation-error:0.2115 [06:00:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [25]#011train-error:0.189667#011validation-error:0.2106 [06:00:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [26]#011train-error:0.187867#011validation-error:0.2079 [06:00:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [27]#011train-error:0.187133#011validation-error:0.2068 [06:00:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [28]#011train-error:0.184867#011validation-error:0.2045 [06:00:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [29]#011train-error:0.1816#011validation-error:0.2037 [06:00:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [30]#011train-error:0.180267#011validation-error:0.2031 [06:00:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [31]#011train-error:0.1784#011validation-error:0.2026 [06:00:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 16 pruned nodes, max_depth=5 [32]#011train-error:0.178067#011validation-error:0.2019 [06:00:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [33]#011train-error:0.176733#011validation-error:0.2015 [06:00:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [34]#011train-error:0.174867#011validation-error:0.2009 [06:00:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [35]#011train-error:0.173333#011validation-error:0.199 [06:00:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [36]#011train-error:0.173133#011validation-error:0.199 [06:00:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [37]#011train-error:0.171533#011validation-error:0.1984 [06:00:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [38]#011train-error:0.1694#011validation-error:0.1965 [06:00:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [39]#011train-error:0.168333#011validation-error:0.1955 [06:00:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [40]#011train-error:0.166467#011validation-error:0.1942 [06:00:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [41]#011train-error:0.166#011validation-error:0.1933 [06:00:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [42]#011train-error:0.164#011validation-error:0.1918 [06:00:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra 
nodes, 6 pruned nodes, max_depth=5 [43]#011train-error:0.162933#011validation-error:0.1905 [06:00:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [44]#011train-error:0.162267#011validation-error:0.1894 [06:00:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [45]#011train-error:0.161667#011validation-error:0.1891 [06:01:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5 [46]#011train-error:0.1612#011validation-error:0.1895 [06:01:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [47]#011train-error:0.159333#011validation-error:0.1895 [06:01:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [48]#011train-error:0.1586#011validation-error:0.1895 [06:01:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [49]#011train-error:0.158#011validation-error:0.1891 [06:01:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 [50]#011train-error:0.1566#011validation-error:0.1887 [06:01:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [51]#011train-error:0.1562#011validation-error:0.188 [06:01:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [52]#011train-error:0.155133#011validation-error:0.188 [06:01:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [53]#011train-error:0.154733#011validation-error:0.1857 [06:01:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 [54]#011train-error:0.153933#011validation-error:0.1865 [06:01:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5 [55]#011train-error:0.153733#011validation-error:0.1857 [06:01:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [56]#011train-error:0.151933#011validation-error:0.1855 [06:01:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5 [57]#011train-error:0.151867#011validation-error:0.1844 [06:01:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5 [58]#011train-error:0.151333#011validation-error:0.1856 [06:01:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5 [59]#011train-error:0.150467#011validation-error:0.1848 [06:01:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [60]#011train-error:0.149333#011validation-error:0.1833 [06:01:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [61]#011train-error:0.148267#011validation-error:0.1822 [06:01:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [62]#011train-error:0.148267#011validation-error:0.1819 [06:01:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [63]#011train-error:0.147133#011validation-error:0.1822 [06:01:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 
[64]#011train-error:0.146133#011validation-error:0.1805 [06:01:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5 [65]#011train-error:0.146067#011validation-error:0.1804 [06:01:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [66]#011train-error:0.1456#011validation-error:0.1789 [06:01:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [67]#011train-error:0.144867#011validation-error:0.179 [06:01:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [68]#011train-error:0.144533#011validation-error:0.1788 [06:01:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5 [69]#011train-error:0.143667#011validation-error:0.1781 [06:01:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [70]#011train-error:0.142933#011validation-error:0.1789 [06:01:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [71]#011train-error:0.1422#011validation-error:0.1791 [06:01:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [72]#011train-error:0.141333#011validation-error:0.178 [06:01:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=5 [73]#011train-error:0.141667#011validation-error:0.1769 [06:01:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [74]#011train-error:0.141067#011validation-error:0.1779 [06:01:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [75]#011train-error:0.1408#011validation-error:0.1774 [06:01:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [76]#011train-error:0.14#011validation-error:0.1771 [06:01:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5 [77]#011train-error:0.1398#011validation-error:0.1766 [06:01:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [78]#011train-error:0.139733#011validation-error:0.1771 [06:01:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [79]#011train-error:0.138533#011validation-error:0.1753 [06:01:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5 [80]#011train-error:0.137133#011validation-error:0.1763 [06:01:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 18 pruned nodes, max_depth=5 [81]#011train-error:0.135467#011validation-error:0.1752 [06:01:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [82]#011train-error:0.134733#011validation-error:0.1759 [06:01:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [83]#011train-error:0.134733#011validation-error:0.1762 [06:01:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5 [84]#011train-error:0.134133#011validation-error:0.1768 [06:01:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 
[85]#011train-error:0.134#011validation-error:0.1757 [06:01:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [86]#011train-error:0.1334#011validation-error:0.175 [06:01:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [87]#011train-error:0.133#011validation-error:0.1754 [06:01:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5 [88]#011train-error:0.1332#011validation-error:0.1757 [06:01:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [89]#011train-error:0.132867#011validation-error:0.1755 [06:01:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5 [90]#011train-error:0.1322#011validation-error:0.1751 [06:01:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [91]#011train-error:0.132#011validation-error:0.1751 [06:01:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5 [92]#011train-error:0.131333#011validation-error:0.1754 [06:02:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5 [93]#011train-error:0.130867#011validation-error:0.1749 [06:02:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [94]#011train-error:0.129867#011validation-error:0.1745 [06:02:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5 [95]#011train-error:0.129133#011validation-error:0.1745 [06:02:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5 [96]#011train-error:0.1296#011validation-error:0.1755 [06:02:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5 [97]#011train-error:0.128667#011validation-error:0.1752 [06:02:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5 [98]#011train-error:0.128933#011validation-error:0.1753 [06:02:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5 [99]#011train-error:0.1288#011validation-error:0.1751 [06:02:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [100]#011train-error:0.128067#011validation-error:0.1747 [06:02:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 16 pruned nodes, max_depth=5 [101]#011train-error:0.128133#011validation-error:0.1742 [06:02:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [102]#011train-error:0.1266#011validation-error:0.1749 [06:02:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5 [103]#011train-error:0.126333#011validation-error:0.1741 [06:02:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5 [104]#011train-error:0.1256#011validation-error:0.1743 [06:02:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [105]#011train-error:0.125533#011validation-error:0.1744 [06:02:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 
[106]#011train-error:0.125533#011validation-error:0.1745 [06:02:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [107]#011train-error:0.125067#011validation-error:0.1742 [06:02:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5 [108]#011train-error:0.124733#011validation-error:0.174 [06:02:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5 [109]#011train-error:0.124267#011validation-error:0.1744 [06:02:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [110]#011train-error:0.124467#011validation-error:0.1743 [06:02:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [111]#011train-error:0.1238#011validation-error:0.174 [06:02:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [112]#011train-error:0.122933#011validation-error:0.1749 [06:02:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [113]#011train-error:0.121933#011validation-error:0.1746 [06:02:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [114]#011train-error:0.1224#011validation-error:0.1743 [06:02:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 [115]#011train-error:0.121467#011validation-error:0.1748 [06:02:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [116]#011train-error:0.121333#011validation-error:0.1758 [06:02:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5 [117]#011train-error:0.121133#011validation-error:0.1753 [06:02:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5 [118]#011train-error:0.120533#011validation-error:0.1755 Stopping. Best iteration: [108]#011train-error:0.124733#011validation-error:0.174  2020-08-28 06:02:44 Uploading - Uploading generated training model 2020-08-28 06:02:44 Completed - Training job completed Training seconds: 212 Billable seconds: 212 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. 
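###Markdown
As an aside, one way the leakage concern raised in the question above might be addressed is to hold out part of the new data *before* any retraining, so that the updated model can later be evaluated on reviews it has never seen. A rough sketch of that idea follows; `new_XV_full` and `new_Y_full` are placeholder names for the complete bag-of-words encoded new data set and its labels, not variables defined earlier in this notebook.
###Code
from sklearn.model_selection import train_test_split

# Placeholder names: new_XV_full / new_Y_full stand for the full encoded new data set and its labels.
# The held-out portion would never be touched while fitting the new model and would be used
# only for the final evaluation.
new_XV_rest, new_XV_holdout, new_Y_rest, new_Y_holdout = train_test_split(
    new_XV_full, new_Y_full, test_size=0.2, random_state=42)
###Output
_____no_output_____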
###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ...........................Arguments: serve [2020-08-28 06:27:55 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-08-28 06:27:55 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-08-28 06:27:55 +0000] [1] [INFO] Using worker: gevent [2020-08-28 06:27:55 +0000] [36] [INFO] Booting worker with pid: 36 [2020-08-28 06:27:55 +0000] [37] [INFO] Booting worker with pid: 37 [2020-08-28:06:27:55:INFO] Model loaded successfully for worker : 36 [2020-08-28 06:27:55 +0000] [38] [INFO] Booting worker with pid: 38 [2020-08-28:06:27:55:INFO] Model loaded successfully for worker : 37 [2020-08-28 06:27:55 +0000] [39] [INFO] Booting worker with pid: 39 [2020-08-28:06:27:55:INFO] Model loaded successfully for worker : 38 [2020-08-28:06:27:56:INFO] Model loaded successfully for worker : 39 [2020-08-28:06:27:56:INFO] Sniff delimiter as ',' [2020-08-28:06:27:56:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:27:56:INFO] Sniff delimiter as ',' [2020-08-28:06:27:56:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:27:56:INFO] Sniff delimiter as ',' [2020-08-28:06:27:56:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:27:56:INFO] Sniff delimiter as ',' [2020-08-28:06:27:56:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:27:58:INFO] Sniff delimiter as ',' [2020-08-28:06:27:58:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:27:58:INFO] Sniff delimiter as ',' [2020-08-28:06:27:58:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:27:58:INFO] Sniff delimiter as ',' [2020-08-28:06:27:58:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:27:58:INFO] Sniff delimiter as ',' [2020-08-28:06:27:58:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:27:59:INFO] Sniff delimiter as ',' [2020-08-28:06:27:59:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:27:58:INFO] Sniff delimiter as ',' [2020-08-28:06:27:58:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:27:59:INFO] Sniff delimiter as ',' [2020-08-28:06:27:59:INFO] Determined delimiter of CSV input is ',' 2020-08-28T06:27:55.788:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-08-28:06:28:01:INFO] Sniff delimiter as ',' [2020-08-28:06:28:01:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:01:INFO] Sniff delimiter as ',' [2020-08-28:06:28:01:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:01:INFO] Sniff delimiter as ',' [2020-08-28:06:28:01:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:01:INFO] Sniff delimiter as ',' [2020-08-28:06:28:01:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:01:INFO] Sniff delimiter as ',' [2020-08-28:06:28:01:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:01:INFO] Sniff delimiter as ',' [2020-08-28:06:28:01:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:01:INFO] Sniff delimiter as ',' [2020-08-28:06:28:01:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:01:INFO] Sniff delimiter as ',' [2020-08-28:06:28:01:INFO] Determined 
delimiter of CSV input is ',' [2020-08-28:06:28:03:INFO] Sniff delimiter as ',' [2020-08-28:06:28:03:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:03:INFO] Sniff delimiter as ',' [2020-08-28:06:28:03:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:03:INFO] Sniff delimiter as ',' [2020-08-28:06:28:03:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:04:INFO] Sniff delimiter as ',' [2020-08-28:06:28:04:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:03:INFO] Sniff delimiter as ',' [2020-08-28:06:28:03:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:03:INFO] Sniff delimiter as ',' [2020-08-28:06:28:03:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:03:INFO] Sniff delimiter as ',' [2020-08-28:06:28:03:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:04:INFO] Sniff delimiter as ',' [2020-08-28:06:28:04:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:06:INFO] Sniff delimiter as ',' [2020-08-28:06:28:06:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:06:INFO] Sniff delimiter as ',' [2020-08-28:06:28:06:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:06:INFO] Sniff delimiter as ',' [2020-08-28:06:28:06:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:06:INFO] Sniff delimiter as ',' [2020-08-28:06:28:06:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:06:INFO] Sniff delimiter as ',' [2020-08-28:06:28:06:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:06:INFO] Sniff delimiter as ',' [2020-08-28:06:28:06:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:06:INFO] Sniff delimiter as ',' [2020-08-28:06:28:06:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:06:INFO] Sniff delimiter as ',' [2020-08-28:06:28:06:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:08:INFO] Sniff delimiter as ',' [2020-08-28:06:28:08:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:08:INFO] Sniff delimiter as ',' [2020-08-28:06:28:08:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:08:INFO] Sniff delimiter as ',' [2020-08-28:06:28:08:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:08:INFO] Sniff delimiter as ',' [2020-08-28:06:28:08:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:11:INFO] Sniff delimiter as ',' [2020-08-28:06:28:11:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:11:INFO] Sniff delimiter as ',' [2020-08-28:06:28:11:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:11:INFO] Sniff delimiter as ',' [2020-08-28:06:28:11:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:11:INFO] Sniff delimiter as ',' [2020-08-28:06:28:11:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:11:INFO] Sniff delimiter as ',' [2020-08-28:06:28:11:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:11:INFO] Sniff delimiter as ',' [2020-08-28:06:28:11:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:11:INFO] Sniff delimiter as ',' [2020-08-28:06:28:11:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:11:INFO] Sniff delimiter as ',' [2020-08-28:06:28:11:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:13:INFO] Sniff delimiter as ',' [2020-08-28:06:28:13:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:13:INFO] Sniff delimiter as ',' [2020-08-28:06:28:13:INFO] Determined delimiter of 
CSV input is ',' [2020-08-28:06:28:13:INFO] Sniff delimiter as ',' [2020-08-28:06:28:13:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:13:INFO] Sniff delimiter as ',' [2020-08-28:06:28:13:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:13:INFO] Sniff delimiter as ',' [2020-08-28:06:28:13:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:13:INFO] Sniff delimiter as ',' [2020-08-28:06:28:13:INFO] Sniff delimiter as ',' [2020-08-28:06:28:13:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:13:INFO] Sniff delimiter as ',' [2020-08-28:06:28:13:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:13:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:15:INFO] Sniff delimiter as ',' [2020-08-28:06:28:15:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:15:INFO] Sniff delimiter as ',' [2020-08-28:06:28:15:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:15:INFO] Sniff delimiter as ',' [2020-08-28:06:28:15:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:15:INFO] Sniff delimiter as ',' [2020-08-28:06:28:15:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:16:INFO] Sniff delimiter as ',' [2020-08-28:06:28:16:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:16:INFO] Sniff delimiter as ',' [2020-08-28:06:28:16:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:16:INFO] Sniff delimiter as ',' [2020-08-28:06:28:16:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:16:INFO] Sniff delimiter as ',' [2020-08-28:06:28:16:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:18:INFO] Sniff delimiter as ',' [2020-08-28:06:28:18:INFO] Sniff delimiter as ',' [2020-08-28:06:28:18:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:18:INFO] Sniff delimiter as ',' [2020-08-28:06:28:18:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:18:INFO] Sniff delimiter as ',' [2020-08-28:06:28:18:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:18:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:18:INFO] Sniff delimiter as ',' [2020-08-28:06:28:18:INFO] Determined delimiter of CSV input is ',' [2020-08-28:06:28:18:INFO] Sniff delimiter as ',' [2020-08-28:06:28:18:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-ap-south-1-714138043953/xgboost-2020-08-28-06-23-31-691/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. 
Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. 
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endtime_config( EndpointConfigName=new_xgb_endpoint_config_name, ProductionVariants=[{ 'InstanceType': 'ml.m4.xlarge', 'InitialVariantWeight': 1, 'InitialInstanceCount': 1, 'ModelName': new_xgb_endpoint_config_name, 'VariantName': 'XGB_Model' }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. 
###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-05-05 01:45:03-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 
200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 44.5MB/s in 1.8s 2020-05-05 01:45:05 (44.5 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
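As a quick, hedged illustration of what these steps do, the short sketch below applies the same kind of processing (strip HTML, lowercase, drop stopwords, stem) to a single made-up review; the `toy_review` string is invented for this example, and the full processing function used on the real data is defined in the next cell. ###Code
# Sketch only: show the effect of the individual processing steps on one made-up review.
import re
import nltk
nltk.download("stopwords", quiet=True)
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from bs4 import BeautifulSoup

toy_review = "<br />This movie was <b>surprisingly</b> good, and the acting was great!"

text = BeautifulSoup(toy_review, "html.parser").get_text()   # remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())            # lowercase, keep only alphanumerics
words = [w for w in text.split() if w not in stopwords.words("english")]  # drop stopwords
print([PorterStemmer().stem(w) for w in words])              # reduce the remaining words to stems
###Output _____no_output_____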
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Wrote preprocessed data to cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
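As a small illustration of that point, the sketch below (with made-up documents that are separate from the real data set) fits a `CountVectorizer` on "training" documents only; any word that appears only in the "test" document is simply ignored when it is transformed. The real feature extraction for our data set is done in the next cell. ###Code
# Sketch only: the vocabulary comes from the training documents, so unseen test words are dropped.
from sklearn.feature_extraction.text import CountVectorizer

toy_train_docs = ["great movie great acting", "boring plot weak acting"]
toy_test_docs = ["great plot awful ending"]  # 'awful' and 'ending' never appear in training

toy_vectorizer = CountVectorizer()
toy_vectorizer.fit(toy_train_docs)                        # vocabulary built from training documents only
print(sorted(toy_vectorizer.vocabulary_.keys()))
print(toy_vectorizer.transform(toy_test_docs).toarray())  # encoding ignores the unseen words
###Output _____no_output_____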
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2020-05-05 02:14:01 Starting - Starting the training job... 2020-05-05 02:14:02 Starting - Launching requested ML instances...... 2020-05-05 02:15:05 Starting - Preparing the instances for training... 2020-05-05 02:15:53 Downloading - Downloading input data... 2020-05-05 02:16:29 Training - Downloading the training image... 2020-05-05 02:17:01 Uploading - Uploading generated training model 2020-05-05 02:17:01 Completed - Training job completed Arguments: train [2020-05-05:02:16:49:INFO] Running standalone xgboost training. [2020-05-05:02:16:49:INFO] File size need to be processed in the node: 9.14mb. 
Available memory size in the node: 8501.36mb [2020-05-05:02:16:49:INFO] Determined delimiter of CSV input is ',' [02:16:49] S3DistributionType set as FullyReplicated [02:16:49] 15000x178 matrix with 2670000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-05-05:02:16:49:INFO] Determined delimiter of CSV input is ',' [02:16:49] S3DistributionType set as FullyReplicated [02:16:49] 10000x178 matrix with 1780000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [02:16:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [0]#011train-error:0.481533#011validation-error:0.5101 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [02:16:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [1]#011train-error:0.468267#011validation-error:0.4989 [02:16:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 4 pruned nodes, max_depth=5 [2]#011train-error:0.451533#011validation-error:0.5042 [02:16:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 0 pruned nodes, max_depth=5 [3]#011train-error:0.444933#011validation-error:0.5045 [02:16:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5 [4]#011train-error:0.437133#011validation-error:0.5085 [02:16:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5 [5]#011train-error:0.432733#011validation-error:0.5066 [02:16:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 2 pruned nodes, max_depth=5 [6]#011train-error:0.425467#011validation-error:0.5066 [02:16:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 0 pruned nodes, max_depth=5 [7]#011train-error:0.422733#011validation-error:0.5049 [02:16:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [8]#011train-error:0.421267#011validation-error:0.5029 [02:16:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [9]#011train-error:0.414533#011validation-error:0.5118 [02:16:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [10]#011train-error:0.411667#011validation-error:0.5092 [02:16:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 0 pruned nodes, max_depth=5 [11]#011train-error:0.407533#011validation-error:0.5105 Stopping. Best iteration: [1]#011train-error:0.468267#011validation-error:0.4989  Training seconds: 68 Billable seconds: 68 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMakers Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can peform inference on a large number of samples. An example of this in industry might be peforming an end of month report. This method of inference can also be useful to us as it means to can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer objects from our trained estimator object. 
###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output .....................Arguments: serve [2020-05-05 02:20:36 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-05-05 02:20:36 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-05-05 02:20:36 +0000] [1] [INFO] Using worker: gevent [2020-05-05 02:20:36 +0000] [37] [INFO] Booting worker with pid: 37 [2020-05-05 02:20:36 +0000] [38] [INFO] Booting worker with pid: 38 [2020-05-05 02:20:36 +0000] [39] [INFO] Booting worker with pid: 39 [2020-05-05 02:20:36 +0000] [40] [INFO] Booting worker with pid: 40 [2020-05-05:02:20:36:INFO] Model loaded successfully for worker : 37 [2020-05-05:02:20:36:INFO] Model loaded successfully for worker : 39 [2020-05-05:02:20:36:INFO] Model loaded successfully for worker : 38 [2020-05-05:02:20:36:INFO] Model loaded successfully for worker : 40 [2020-05-05:02:20:58:INFO] Sniff delimiter as ',' [2020-05-05:02:20:58:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:20:58:INFO] Sniff delimiter as ',' [2020-05-05:02:20:58:INFO] Determined delimiter of CSV input is ',' 2020-05-05T02:20:57.602:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-us-west-2-202593872157/xgboost-2020-05-05-02-17-13-507/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. 
Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x:x, tokenizer = lambda x:x ) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir,'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. 
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type ='Line') xgb_transformer.wait() ###Output ....................Arguments: serve [2020-05-05 02:24:39 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-05-05 02:24:39 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-05-05 02:24:39 +0000] [1] [INFO] Using worker: gevent [2020-05-05 02:24:39 +0000] [37] [INFO] Booting worker with pid: 37 [2020-05-05 02:24:39 +0000] [38] [INFO] Booting worker with pid: 38 [2020-05-05 02:24:39 +0000] [39] [INFO] Booting worker with pid: 39 [2020-05-05 02:24:39 +0000] [40] [INFO] Booting worker with pid: 40 [2020-05-05:02:24:39:INFO] Model loaded successfully for worker : 38 [2020-05-05:02:24:39:INFO] Model loaded successfully for worker : 37 [2020-05-05:02:24:39:INFO] Model loaded successfully for worker : 39 [2020-05-05:02:24:39:INFO] Model loaded successfully for worker : 40 [2020-05-05:02:24:56:INFO] Sniff delimiter as ',' [2020-05-05:02:24:56:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:24:56:INFO] Sniff delimiter as ',' [2020-05-05:02:24:56:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:24:56:INFO] Sniff delimiter as ',' [2020-05-05:02:24:56:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:24:56:INFO] Sniff delimiter as ',' [2020-05-05:02:24:56:INFO] Determined delimiter of CSV input is ',' 2020-05-05T02:24:55.465:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-us-west-2-202593872157/xgboost-2020-05-05-02-21-33-841/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. 
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output WARNING:sagemaker:Using already existing model: xgboost-2020-05-05-02-14-01-304 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['2', 'star', 'kay', 'franci', 'wonder', 'deserv', 'horribl', 'tripe', 'warner', 'bro', 'threw', 'way', 'two', 'prong', 'premis', 'movi', 'base', 'ridicul', 'unbeliev', 'extrem', 'kay', 'small', 'town', 'wife', 'mother', 'yearn', 'someth', 'bigger', 'want', 'actress', 'big', 'shot', 'actor', 'come', 'town', 'invit', 'kay', 'hotel', 'talk', 'possibl', 'kay', 'tell', 'husband', 'go', 'movi', 'hubbi', 'biddi', 'mother', 'put', 'bug', 'hubbi', 'ear', 'kay', 'truth', 'set', 'look', 'find', 'w', 'actor', 'hotel', 'talk', 'slug', 'guy', 'fall', 'rail', 'land', 'face', 'first', 'pond', 'lake', 'die', 'two', 'unbeliev', 'premis', 'upon', 'rest', 'movi', 'base', '1', 'judg', 'tell', 'juri', 'determin', 'man', 'die', 'head', 'went', 'water', 'must', 'find', 'hubbi', 'guilti', 'first', 'degre', 'murder', 'whaaaaa', 'think', 'slug', 'guy', 'fit', 'rage', 'would', 'count', 'manslaught', 'murder', '2', 'first', 'degre', 'murder', 'give', 'break', 'plot', 'requir', 'found', 'guilti', 'murder', '1', 'could', 'sent', 'prison', 'life', 'whatev', '2', 'hubbi', 'lawyer', 'convict', 'sentenc', 'tell', 'kay', 'fault', 'reason', 'gone', 'actor', 'room', 'husband', 'go', 'slug', 'guy', 'kill', 'tell', 'guilti', 'one', 'husband', 'nod', 'agre', 'hell', 'rest', 'movi', 'kay', 'tri', 'achiev', 'fame', 'money', 'order', 'get', 'husband', 'releas', 'prison', 'right', 'wrong', 'commit', 'caus', 'kill', 'actor', 'dude', 'first', 'place', 'even', 'go', 'review', 'movi', 'pain', 'four', 'year', 'earlier', 'pre', 'code', 'day', 'never', 'caught', 'kay', 'play', 'wimp', 'true', 'kay', 'franci', 'fashion', 'though', 'best', 'make', 'us', 'believ', 'woman', 'believ', 'charact', 'give', 'much', 'credit', 'tri', 'breath', 'life', 'credibl', 'thankless', 'role', 'charact', 'far', 'cri', 'pre', 'code', 'kay', 'role', 'real', 'life', 'spitfir', 'kay', 'franci', 'steer', 'way', 'clear', 'one', 'much', 'better', 'kay', 'franci', 'vehicl', 'person', 'experi', 
'highli', 'recommend', 'mari', 'steven', 'md', 'jewel', 'robberi', 'also', 'good', 'dr', 'monica', 'one', 'way', 'passag', 'sure', 'great', 'kay', 'flick', 'well', 'mention', 'one', 'seen', 'recommend'], 0) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'\x97', 'í', ')', 'è', '®', '¦', '\\', 'à', '»', '…', 'O', 'Ä', 'â', 'ō', 'ç', '\x9e', '\x08', 'S', '\x80', "'", '₤', 'ö', 'ò', '=', '<', 'ü', 'N', 'W', '[', 'ì', '^', 'A', '"', 'î', 'y', '%', 'Õ', 'Ü', '`', 's', 'æ', ' ', '¾', 'M', 'ï', '|', 'J', 'á', '&', 'ã', 'ý', '¢', '*', 'K', 'ä', '\xad', 'P', 'Ã', '“', 'È', '\x10', '\uf0b7', ']', 'D', '_', 'a', '\x84', 'E', '\x96', '}', '¡', 'Ø', '.', '>', 'û', 'ð', 'ø', 'º', 'G', 'L', '!', '(', '·', '¿', 'm', '$', 'o', 'Y', 'Á', '#', '–', 'Z', '¤', 'V', 'U', '\x9a', 'å', 'ë', 'F', 'H', 'À', '{', 'I', ';', '\x8d', '-', 'Q', '’', 'T', '\t', '£', 'B', 'ß', '¨', '³', 'C', 'X', '\xa0', ',', '\x8e', '/', 'Å', '@', '½', 'ú', 'ù', '?', '\x95', 'R', '\x85', '~', '‘', '\x91', 'ñ', 'ó', '§', 'd', 't', 'é', 'Ê', '+', 'ô', '°', '«', '”', 'É', 'i', ':', '´', 'ê'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. 
###Code print(new_vocabulary - original_vocabulary) ###Output {'percept', 'evolv', 'albert', '1974', 'magic', 'either', 'neighbor', 'spin', 'fade', 'convincingli', 'own', 'darren', 'aveng', 'wheel', 'lampoon', 'companion', 'eventu', 'horn', 'jennif', 'hint', 'job', 'hollow', 'mount', 'incompet', 'ok', 'flynn', 'mild', 'new', 'william', 'sidekick', 'cg', 'thoma', 'sore', 'voyag', 'pain', 'master', 'sum', 'nuclear', 'problem', 'swear', 'buff', 'fundament', 'mid', 'qualiti', 'alright', 'charm', 'theatr', 'timothi', 'behavior', 'smooth', 'rubbish', 'poor', 'univers', 'die', 'luke', 'book', 'frank', 'preach', 'elsewher', 'pickford', 'duval', 'eccentr', 'growth', 'price', 'sake', 'fallen', 'given', 'bob', 'bach', 'appear', 'fed', 'imperson', 'lengthi', 'choic', 'artifici', 'kazan', 'stiff', 'cypher', 'clean', 'pilot', 'typic', 'nolan', 'seri', 'acclaim', 'laid', '2000', 'bobbi', 'onlin', 'lex', 'roof', 'freak', 'ellen', 'obviou', 'cassavet', 'biblic', 'age', 'philip', 'carri', 'rita', 'witch', 'ashley', 'jerk', 'neither', 'sandra', 'bitter', 'psychopath', 'tie', 'go', 'exploit', 'peter', 'mission', 'entranc', 'soap', 'festiv', 'scare', 'la', 'gave', 'immedi', 'ramon', 'lovabl', 'assembl', 'charg', 'soccer', 'immigr', 'scottish', 'astronaut', 'whatev', 'disc', 'misguid', 'rehash', 'climat', 'lesli', 'vari', 'rang', 'suggest', 'gundam', 'day', 'rule', 'sustain', 'monti', 'backward', 'lang', 'sincer', 'hopper', 'urban', 'natur', 'instead', 'neill', 'marlon', 'tortur', 'unusu', 'signific', 'capit', 'deceas', 'reflect', 'renaiss', 'attempt', 'margin', 'rick', 'whole', 'human', 'banter', 'orson', 'shirley', 'horrifi', 'pre', 'simpson', 'wooden', 'laughabl', 'berlin', 'woodi', 'uniform', 'john', 'numer', 'journey', 'villain', 'sid', 'slaughter', 'prequel', 'obscur', 'balanc', 'restrict', 'surf', 'youth', 'exactli', 'uniqu', 'mormon', 'pauli', 'dismal', 'forest', 'safe', 'novak', 'imit', 'techniqu', 'beneath', 'monster', 'trash', 'coffe', 'ant', 'otherwis', 'hollywood', 'gloriou', 'listen', 'altogeth', 'interpret', '40', 'vein', 'dash', 'importantli', 'doom', 'mathieu', 'purchas', 'plastic', 'field', 'plagu', 'cari', 'smith', 'cheap', 'us', 'insipid', '2006', 'rare', 'burt', 'outing', 'drove', 'quickli', 'lol', 'lazi', 'contrast', 'comprehend', 'stun', 'jeff', 'randi', 'wound', 'frankli', 'femm', 'masterson', 'posit', 'inaccuraci', 'particip', 'bright', 'colleg', 'support', 'inform', 'charact', 'pursuit', 'process', 'ha', 'hall', 'tomorrow', 'increas', 'nope', 'boat', 'darker', 'gentl', 'els', 'vastli', 'guin', 'challeng', 'forgotten', 'countrysid', 'awe', 'coach', 'holi', 'saw', 'welcom', 'sunday', 'ingredi', 'burst', 'that', 'logan', 'grandmoth', 'jill', 'hank', '35', 'tea', 'butler', 'helen', 'toward', 'chao', 'car', 'shepherd', 'boot', 'behold', 'announc', 'edgi', 'mechan', 'hokey', 'ireland', 'stilt', 'seal', 'investig', 'simpli', 'choreograph', 'unbeliev', 'row', 'uncov', 'move', 'zoom', 'doubt', 'star', 'squar', 'forbidden', 'campbel', 'triangl', 'fatal', '1988', 'trap', 'soundtrack', 'depth', 'whatsoev', 'realiz', 'aesthet', '75', 'boil', 'type', 'convert', 'loui', 'lot', 'vivid', 'tend', 'male', 'defin', 'maci', 'potter', 'secur', 'unravel', '1971', 'home', 'late', 'pepper', 'pole', 'door', 'fast', 'swedish', 'complex', 'legend', 'dump', 'castl', 'later', 'opinion', 'butcher', 'inspector', 'bare', 'saga', 'seller', 'hamlet', 'work', 'cancer', 'local', 'masterpiec', 'afford', 'probabl', 'likewis', 'camcord', 'vietnam', 'winner', 'anchor', 'stale', '2005', 'onto', 'moment', 'keith', 'cartoonish', 
'scari', 'water', 'anderson', 'disregard', 'yearn', 'reev', 'interestingli', 'emperor', 'homag', 'franchis', 'offic', 'wealthi', 'straightforward', 'photo', 'folk', 'extrem', 'springer', 'nephew', 'declin', 'uh', 'never', 'superior', 'splatter', 'carel', 'meryl', 'nicola', 'present', 'happili', 'danc', 'theater', 'gari', 'birthday', 'pervert', 'penn', 'red', 'disbelief', 'gone', 'knowledg', 'perpetu', 'despit', '1991', 'lend', 'fido', 'spent', 'hack', 'altern', 'friendli', 'monoton', 'hotel', 'spider', 'english', 'accuraci', 'trite', 'leo', 'bela', 'help', 'becom', 'gay', 'collector', 'hero', 'steven', 'alon', 'compromis', 'seemingli', 'tast', 'fenc', 'jo', 'johnson', 'sung', 'steer', 'impress', 'confess', 'dawn', 'pound', 'crystal', 'flame', 'fli', 'prais', 'ruth', 'wendi', 'antholog', 'deliveri', 'kline', 'thick', 'low', 'howev', 'barbara', 'rest', 'enjoy', 'pearl', 'earn', 'tendenc', 'demis', 'plant', 'pervers', 'laurel', 'conneri', 'unpleas', 'infect', 'depart', 'historian', 'grace', 'gunga', 'decept', 'atroc', 'pie', 'tear', 'cash', 'east', 'howard', 'mass', 'solid', 'jungl', 'ii', 'campi', 'worst', 'flashback', 'equal', 'combin', 'ago', 'tyler', 'session', 'controversi', 'lui', 'stuck', 'boredom', 'intellig', 'shout', 'mann', 'fri', 'sutherland', 'war', 'stiller', 'damag', 'slave', 'heroic', 'park', 'deep', 'defi', 'letter', 'charli', 'irish', 'er', 'region', 'faith', 'viewpoint', 'inept', 'stomach', 'beast', 'primarili', 'aforement', 'choke', 'topless', 'news', 'arab', 'fuller', 'alli', 'easier', 'activ', 'carl', 'introduct', 'eugen', 'meat', 'rapist', 'suitabl', 'advanc', 'transform', 'mayb', 'rochest', 'conveni', 'consist', 'mute', 'flawless', 'beyond', 'showdown', 'world', 'threw', 'blob', 'blackmail', 'april', 'anywher', 'verhoeven', 'swallow', 'creat', 'arrow', 'rebel', 'uncomfort', 'empti', 'finest', 'pocket', 'harm', 'iron', 'biographi', 'vain', 'nine', 'jon', 'stake', 'astound', 'drivel', 'horrend', 'con', 'content', 'thrill', 'golden', 'omen', 'brook', 'cemeteri', 'remot', 'necessari', 'kennedi', 'outstand', 'copi', 'chines', 'mom', 'vaniti', 'closet', 'alley', 'standout', 'slick', 'readi', 'lane', 'afternoon', 'embark', 'inappropri', 'normal', 'michel', 'laugh', 'tiger', 'redneck', 'laughter', 'gimmick', 'descent', 'bite', 'reign', 'chosen', 'duke', 'style', 'ignor', 'wive', 'uncut', 'aussi', 'accent', 'israel', 'paul', 'repeat', 'luxuri', 'jim', 'enlighten', 'bergman', 'ratso', 'sacrif', 'brand', 'class', 'averag', 'viewer', 'repress', 'deal', 'veteran', 'span', 'rough', 'video', 'marti', 'gross', 'long', 'bo', 'torment', 'roller', 'andrew', 'bleed', 'offens', 'attribut', 'though', 'up', 'sandler', 'promis', 'admit', 'annoy', 'leav', 'join', 'compliment', 'narrow', 'macabr', 'mst3k', 'prefer', 'multipl', 'stone', 'fish', 'person', 'cabl', 'melt', 'ladi', 'cinemat', 'add', 'devast', 'asleep', 'lowest', 'holiday', 'olli', 'speech', 'sam', 'sold', 'chri', 'somehow', 'ryan', 'eli', 'concentr', 'crazi', 'histor', 'major', 'gabriel', 'compass', 'molli', 'dragon', 'immers', 'pitch', 'build', 'fox', 'industri', 'blade', 'evan', 'layer', 'leon', 'vega', 'disney', 'includ', 'dislik', 'exhibit', 'tight', 'breakfast', 'finish', 'grew', 'frog', 'great', 'destroy', 'retir', 'patricia', 'humour', 'resolut', 'wash', 'affect', 'repli', 'rocket', 'partner', 'humbl', 'karloff', 'ran', 'put', 'truman', 'occasion', 'sloppi', 'goldsworthi', 'trauma', 'canyon', '1990', 'felt', 'drop', 'wretch', 'wire', 'astonish', 'distress', 'mainstream', 'thought', 'break', 'hippi', 'rival', 'wing', 'usual', 
'abomin', 'hulk', 'justifi', 'eye', 'boy', 'sentinel', 'introduc', '30', 'built', 'correctli', 'dicken', 'trivia', 'glanc', 'inhabit', 'cook', 'galaxi', 'bloom', 'signal', 'zone', 'stupid', 'gather', 'tree', 'sirk', 'convolut', 'shortli', 'lift', 'orang', 'baldwin', 'acid', 'green', 'hammer', 'properli', 'silli', 'suicid', 'upper', 'businessman', 'angri', 'financi', 'mesmer', 'par', 'torn', 'comparison', 'account', 'rukh', 'wrap', 'creation', 'nevertheless', 'irrit', 'utter', 'speak', 'driven', 'streep', 'wagner', 'mytholog', 'phil', 'preserv', 'senat', 'fire', 'hawk', 'phoni', 'singer', 'beaten', 'threaten', '14', 'funer', 'miik', 'proper', 'piano', 'atroci', 'sentenc', 'photograph', 'elect', 'lacklust', 'exampl', 'know', 'mgm', 'chair', 'spectacular', 'unintent', 'brilliant', 'catch', 'lie', 'invit', 'opportun', 'captiv', 'wang', 'diana', 'spite', 'guarante', 'belli', 'princip', 'jack', 'murder', 'homeless', 'spiritu', 'dedic', 'colin', 'remak', 'suspicion', 'sequel', 'regard', 'talent', 'mainli', 'spoken', 'hbo', 'pari', 'memor', 'feel', 'coincid', 'wallac', 'rumor', 'edit', 'heart', 'follow', 'aristocrat', 'sympathet', 'compel', 'section', 'christian', 'sinist', 'egg', 'point', 'lead', 'pose', 'appal', 'insist', 'hk', 'fan', 'abort', 'habit', 'simultan', 'flee', 'replay', 'bakshi', 'vader', 'repuls', 'whore', 'prior', 'lifetim', 'convict', 'amazon', 'brando', 'eastern', 'unsuspect', 'accid', 'leader', 'civil', 'space', 'moor', 'premis', 'flash', 'chest', 'vital', 'profit', 'somewhat', 'occupi', 'watch', 'sexual', 'gerard', 'miracl', 'id', 'slide', 'entir', 'hart', 'creatur', 'whose', 'restor', 'appar', 'chill', 'cape', 'idea', 'broken', 'print', 'miss', 'voic', 'comic', 'enlist', 'goldblum', 'lush', 'cheesi', 'heat', 'creepi', 'limit', 'solo', 'fist', 'worthi', 'power', 'akin', 'hunter', 'bachelor', 'offer', 'dreck', 'campaign', 'spade', 'paradis', 'ed', 'elm', 'pray', 'dust', 'jake', 'repres', 'frighten', 'trio', 'subtleti', 'jess', 'fifti', 'edg', 'sweat', 'foreign', 'scientist', 'atmospher', 'entitl', 'enabl', 'biggest', 'european', 'known', 'simpl', 'constitut', 'handicap', 'kenneth', 'secondli', 'intent', 'summari', 'dutch', 'contrari', 'we', 'anyth', 'item', 'jade', 'examin', 'moe', 'month', 'kirk', 'epic', 'factori', 'giant', 'belt', 'rivet', 'wick', 'pronounc', 'encourag', 'homicid', 'wig', 'roll', 'recommend', 'luka', 'frustrat', 'admir', 'denni', 'emphas', 'genet', 'stylish', 'clau', 'weak', 'everywher', 'around', 'truth', 'reach', 'figur', 'darn', 'england', '95', 'hire', 'bro', 'hang', 'fond', 'bradi', 'rome', 'pregnant', 'surgeri', 'raw', 'safeti', 'illeg', 'shove', 'part', 'info', 'british', 'net', 'sing', 'stimul', 'fund', 'tall', 'abraham', 'absenc', 'insult', 'clint', 'restaur', 'pack', 'paint', 'accident', '4th', 'pokemon', 'happi', 'unconvinc', 'disgrac', 'mari', 'applaud', 'beauti', 'fourth', 'harmless', 'graini', 'leigh', 'strength', 'dracula', 'goal', 'whoever', 'warmth', 'jewel', 'murphi', 'babe', 'share', 'musician', 'lo', 'wealth', 'wannab', 'geek', 'encount', 'slash', 'terrorist', 'albeit', 'shatter', 'treat', 'vincent', 'reli', 'tarzan', 'vulner', '73', 'someth', 'ceremoni', 'persuad', 'clash', 'streisand', 'franc', 'symbol', 'inner', 'championship', 'confront', 'men', 'nyc', 'al', 'hilari', 'reson', 'pet', 'apolog', 'unreal', 'lectur', 'freeman', 'carter', 'lloyd', 'deeper', 'bent', 'accept', 'engross', 'format', 'conclus', 'paltrow', 'preston', 'drug', 'sudden', 'technic', 'fun', 'music', 'northern', 'comput', 'neck', 'dirti', 'reel', 'shortcom', 
'cardboard', 'chemistri', 'associ', 'upon', 'lay', 'vile', 'identifi', 'akshay', 'matthew', 'mitch', 'psychot', 'escap', 'design', 'shoulder', 'suit', 'eyr', 'weapon', 'christi', 'virgin', 'japan', 'forgiv', 'desert', 'alien', 'hoffman', 'justic', 'visitor', 'ann', 'crow', 'snatch', 'ebert', 'pad', 'uwe', 'cerebr', 'gift', 'nonetheless', 'stroke', '1930', 'chaplin', 'piec', 'steadi', 'dri', 'elvi', 'date', 'cher', 'member', 'reason', 'viciou', 'destruct', 'inflict', 'undead', 'breath', 'unrel', 'experiment', 'repris', 'unsatisfi', 'whale', 'commentari', 'remain', 'throat', 'dane', 'econom', 'stuart', 'deepli', 'mayhem', 'websit', 'flower', 'comed', '1st', 'cute', 'eleven', 'matrix', 'shade', 'float', 'fall', 'princess', 'corni', 'dreari', 'everyday', 'supposedli', 'band', 'man', 'act', 'corbett', 'five', 'affair', 'roar', 'boston', 'seen', 'dubiou', 'michael', 'sacrific', 'liu', 'wear', 'oblig', 'mix', 'fairi', 'compet', 'wield', 'stock', 'awar', 'slightli', 'brosnan', 'award', 'someon', 'baker', 'sky', 'beach', 'excit', 'social', 'predict', 'non', 'el', 'disord', 'ground', 'open', 'sell', 'next', 'mall', 'hidden', 'sheer', 'radiat', 'andi', 'tank', 'awkward', 'horrif', 'quot', 'away', 'map', 'lindsay', 'inferior', 'outlin', 'pretens', 'sink', 'unexpectedli', 'ethan', 'wind', 'iran', 'illus', 'visit', 'uninspir', 'winchest', 'artsi', 'note', 'hundr', 'gut', 'warn', 'bride', 'unfair', 'facial', 'singl', 'heard', 'bastard', 'geniu', 'head', 'token', 'warm', 'revel', 'sort', 'bronson', 'crew', 'eve', 'fanat', 'propheci', 'aspect', 'seldom', 'mildr', 'tag', 'inspir', 'cream', 'jazz', 'crook', 'prey', 'craven', 'represent', 'dire', 'sensit', 'templ', 'bowl', 'lesbian', 'past', 'excus', 'angela', 'name', 'penni', 'far', 'joseph', 'warner', 'intend', 'grown', 'photographi', 'recognis', 'cage', 'quaid', 'rot', 'stan', 'taboo', 'breast', 'eight', 'outrag', 'sabrina', 'glad', '25', 'timeless', 'wander', 'phrase', 'regular', 'blood', 'teach', 'dandi', 'ish', 'appeal', 'extent', 'spawn', 'missil', 'nod', 'shred', 'cancel', 'undermin', 'pretti', 'steal', 'lean', 'lisa', 'random', 'jane', 'floor', 'anticip', 'primit', 'melodi', 'rid', 'gruesom', 'memori', 'sergeant', 'counterpart', 'gasp', 'fifth', 'rural', 'eat', 'remark', '1973', 'unforgett', 'linear', 'blah', 'sharon', 'strongest', 'suppli', 'choos', 'bill', 'substanc', 'updat', 'hour', 'continu', 'charl', 'religion', 'vote', 'code', 'poignant', 'small', 'wake', 'lover', 'palac', 'henc', 'sale', 'overdon', 'roman', 'divid', 'bodi', 'rubi', 'poetri', 'portion', 'punk', 'ensur', 'window', 'sooner', 'rose', 'young', 'perceiv', 'clown', 'guess', 'ad', 'insid', 'christin', 'partial', 'rain', 'fault', 'bike', 'axe', 'wife', 'pop', 'johnni', 'tunnel', 'care', 'wild', 'drip', 'ludicr', 'prepar', 'novel', 'purpl', 'magazin', 'sean', 'deed', 'disappoint', 'night', 'kinda', 'strip', 'alreadi', 'coaster', '10', 'cattl', 'keep', 'provok', 'awhil', 'secretli', 'elev', 'fairli', 'lyric', 'brendan', 'feat', 'notion', 'ms', 'anim', 'complaint', 'edi', 'preposter', 'outlaw', 'mason', 'noir', 'domin', 'poorli', 'till', 'condit', 'corrupt', 'princ', 'characterist', 'technolog', 'cope', 'cring', 'shaki', 'thirti', 'enforc', 'cowboy', 'toss', 'rotten', 'funni', 'dear', 'alexand', 'oddli', 'paper', 'mess', 'heston', 'behav', 'basic', 'davi', 'pink', 'smack', 'passag', 'contact', 'frontal', 'garbag', 'childish', 'maggi', 'web', 'walk', 'glorifi', 'left', 'von', 'reed', 'sleaz', 'connect', 'karl', 'chavez', '1986', 'pioneer', 'actor', 'sullivan', 'explain', 'eager', 'nick', 
'gandhi', 'whack', 'kane', '00', 'appli', 'much', 'sitcom', 'skit', 'america', 'sweet', 'hara', 'concern', 'overlook', 'scandal', 'carlito', 'luck', 'cri', 'wave', 'today', 'cameron', 'infam', 'restrain', 'suprem', 'cagney', 'hunt', 'ultra', 'divin', 'mention', 'authent', 'porter', 'tame', 'dazzl', 'revolt', 'sick', 'carradin', 'bud', 'origin', 'septemb', 'confid', 'obnoxi', 'reynold', 'boob', 'teas', 'tower', 'unfortun', 'comedi', 'huston', 'instruct', 'cannon', 'preciou', 'luci', 'pete', 'portray', 'dimens', 'poetic', 'kent', 'direct', 'thoroughli', 'nativ', 'respond', 'snl', 'brian', 'pig', 'spare', 'forth', 'premier', '15', 'clich', 'grade', 'contribut', 'out', 'psycholog', 'longer', 'witti', 'dealer', 'group', 'bless', 'ruthless', 'forgett', 'arrog', 'smile', 'middl', '1995', 'mundan', 'wwii', 'suffic', '22', 'shakespear', '1977', 'mcqueen', 'greg', 'god', 'board', 'nervou', 'bomb', 'tap', 'antagonist', 'subplot', 'simmon', 'winter', 'ingeni', 'centuri', 'cush', 'interrupt', 'kid', 'mini', 'clinic', 'closer', 'state', 'truli', 'betray', 'hard', 'roy', 'unwatch', 'dive', 'brain', 'fake', 'bump', 'casper', 'distant', 'particular', 'jare', 'bell', 'behaviour', 'thank', 'todd', 'depend', 'sketch', 'dee', 'ace', 'york', 'compris', 'definit', 'plenti', 'firm', 'close', 'farmer', 'view', 'togeth', 'select', 'fear', 'programm', 'yesterday', 'satan', 'famou', 'hear', 'navi', 'kitchen', 'literari', 'properti', 'thu', 'punch', 'marin', 'narrat', 'badli', 'isabel', 'finger', 'implic', 'condemn', 'scientif', 'wors', 'remad', 'nightmar', 'nose', 'coher', 'crocodil', 'nuanc', 'scariest', 'damn', 'spirit', 'bridg', 'monument', 'imageri', 'parallel', 'miyazaki', 'program', 'erot', 'retriev', '1979', 'suspens', 'bunni', '13', 'recogniz', 'govern', 'attack', 'need', 'throughout', 'kidman', 'corman', 'www', 'robinson', 'neurot', 'ethnic', 'bullet', 'aid', 'corn', 'flat', 'sexi', 'alcohol', 'accomplish', 'shoe', 'appropri', 'chop', 'televis', 'mum', 'drink', 'grave', 'lone', 'runner', 'sister', 'drum', 'lumet', 'acknowledg', 'provoc', 'gal', 'per', 'victor', 'fbi', 'fortun', 'warrant', 'vengeanc', 'alvin', 'base', 'horror', 'fat', 'charlott', 'volum', 'costum', 'sneak', 'troubl', 'nice', 'except', 'reveng', 'clueless', 'multi', 'transcend', 'lili', 'tail', 'tim', 'constantli', 'monologu', 'sequenc', 'disturb', 'samurai', 'tick', 'wax', 'andr', 'order', 'endear', 'patient', 'favor', 'whip', 'segment', 'fluff', 'brutal', 'dirt', 'stink', 'theatric', 'real', 'meanwhil', 'trek', 'directli', 'orphan', 'track', 'triumph', 'cow', 'shame', 'psych', 'routin', 'mail', 'employ', 'one', '1936', '2001', 'channel', 'transfer', 'shed', 'learn', 'sparkl', 'hatr', 'stalk', 'none', 'bait', 'simon', 'inconsist', 'mill', 'fulci', 'passion', 'werewolf', 'clumsi', 'toy', 'north', 'surprisingli', 'light', 'mon', 'preced', 'derang', 'decis', 'tri', 'aspir', 'jonathan', 'italian', 'doll', 'weav', 'joan', 'stronger', 'caus', 'profession', 'walter', 'inmat', 'uncl', 'room', 'beard', 'ladder', 'epitom', 'santa', 'intern', 'mental', 'initi', 'fate', 'dig', 'flaw', 'difficulti', 'soft', 'determin', 'harri', 'physic', 'daili', 'greedi', 'echo', 'oppon', 'rooney', 'aliv', 'scarecrow', 'policeman', 'unnecessari', 'expect', 'devic', 'hugh', 'ape', 'polit', 'evil', 'friend', 'eleg', 'senseless', 'ah', 'event', 'dollar', 'realiti', 'cabin', 'babi', 'pretenti', 'obligatori', 'express', 'wtf', 'alic', 'truck', 'leather', 'sourc', 'audienc', 'week', 'ass', 'aggress', 'heal', 'stress', 'brood', 'soon', 'paid', 'holli', 'collabor', 'leonard', 
'line', 'bed', 'explan', '24', 'glimps', 'black', 'fay', 'fill', 'circl', 'propaganda', 'eas', 'accur', 'within', 'jewish', 'dan', 'freez', 'mixtur', 'conscienc', 'deem', 'einstein', 'betti', 'wrestl', 'feminin', 'moodi', 'son', 'also', 'angst', 'unabl', 'reunit', 'matur', 'banana', 'yellow', 'jail', 'worth', 'boll', 'mislead', 'sent', 'exterior', 'accord', 'possibl', 'ensu', 'potenti', 'ident', 'spock', 'watson', 'uneven', 'jason', 'demon', 'sibl', 'captur', 'lousi', 'basi', 'ham', 'arm', 'drown', 'zizek', 'buri', 'pirat', 'imdb', 'ye', 'hartley', 'persona', 'strongli', 'store', 'display', 'assum', 'bu', 'gangster', 'amateurish', 'degrad', 'sensat', 'reaction', 'heap', 'suspici', 'conquer', 'categori', 'box', 'alan', 'motorcycl', 'superfici', 'joker', 'devot', 'fragil', 'complain', 'pc', 'sorrow', 'facil', 'grinch', 'germani', 'shake', 'bat', 'robbin', 'lee', 'tack', 'crawl', 'eva', 'debut', 'trust', 'result', 'love', 'plod', 'chees', 'lou', 'invest', 'chicago', 'hesit', 'sunni', 'border', 'smell', 'secret', 'melodramat', 'contriv', 'educ', 'confirm', 'specif', 'mad', 'knock', 'sentiment', 'wrench', 'documentari', 'surround', 'carey', 'toni', 'bumbl', 'cyborg', 'less', 'terrifi', 'thompson', 'delight', 'blockbust', 'newli', 'extraordinari', 'labor', 'franco', 'touch', 'theme', 'religi', 'grate', 'jackson', 'bitch', 'school', 'jessica', 'sidewalk', 'plan', 'hook', 'hide', 'kurt', 'fascist', 'im', 'white', 'idol', 'cinema', 'key', 'eleph', 'desper', 'imagin', 'modern', 'prison', 'warren', 'scoobi', 'via', 'harold', 'ought', 'pathet', 'wont', 'victori', 'hop', 'swing', 'report', 'martian', '100', 'forev', 'downhil', 'background', 'barri', 'closest', 'absorb', 'resolv', 'destini', 'rais', 'craig', 'claim', 'powel', 'alfr', 'splendid', 'innoc', 'surreal', 'made', 'suspend', 'briefli', 'ross', 'retard', 'enthusiasm', 'cent', 'texa', 'sophi', 'control', 'romant', 'shirt', 'appl', 'statement', 'reduc', 'electr', 'lynch', 'sue', 'casual', 'crime', 'said', 'older', 'final', 'ambiti', 'estat', 'vehicl', 'global', 'polic', 'guilt', 'revolutionari', 'elit', 'pant', 'versu', 'ocean', 'hudson', 'wholli', 'propos', 'stinker', 'explos', 'parson', 'walt', 'mexican', 'camera', 'nineti', 'knee', 'societi', 'moder', 'realism', 'marshal', 'feast', 'journalist', 'sometim', 'bulli', 'unhappi', 'candid', 'instanc', 'joey', 'nina', 'donna', 'fog', 'way', 'action', 'shield', 'crash', 'gina', 'sole', 'okay', 'woman', 'australia', 'technicolor', 'overact', 'broke', 'scheme', 'vomit', 'enhanc', 'tube', 'wow', 'sun', 'reject', 'victim', 'kiddi', 'research', 'failur', 'could', 'corps', 'kay', 'brief', 'spice', 'mous', 'wore', 'blond', 'guid', 'christ', 'last', 'georg', 'obvious', 'greek', 'rambl', 'market', 'hous', 'crown', 'sassi', 'road', 'peak', 'nathan', 'bargain', 'daddi', 'francisco', 'vagu', 'captain', 'divorc', 'teacher', 'suddenli', 'innov', 'dim', 'gordon', 'lose', 'win', 'tune', 'special', 'crucial', 'knife', 'ford', 'sit', 'effici', 'brad', 'senior', 'somebodi', 'tribut', 'flight', 'backdrop', 'lab', 'taylor', 'elimin', 'team', 'affleck', 'adult', 'render', 'elvira', 'revolv', 'mccoy', 'mexico', 'detect', 'crush', 'languag', 'latter', 'ie', 'friendship', 'margaret', 'glamor', 'ridicul', 'palanc', 'launch', '1994', 'recal', 'hum', 'librari', 'thunderbird', 'muslim', 'garden', 'measur', 'atlanti', 'buffalo', 'magician', 'period', 'realist', 'dramat', 'jami', 'fit', 'forgiven', 'argument', 'synopsi', 'juvenil', 'film', 'sugar', 'pacif', 'father', 'chess', 'lower', 'jodi', 'impact', 'hackney', 'robin', 'carmen', 
'loi', 'royal', 'sat', 'kim', 'franci', 'seem', 'cannib', 'polanski', 'politician', 'sadist', 'ritual', 'poe', 'pen', 'receiv', 'clark', 'explor', 'trip', 'usa', 'alongsid', 'widescreen', '90', 'soprano', 'fine', '1996', 'start', 'hit', 'due', 'town', 'chuck', 'gestur', 'audit', 'studio', 'spi', 'kill', 'huh', 'deserv', 'ian', 'network', 'bet', 'hostil', 'former', 'firstli', 'rendit', 'russel', '3rd', 'pot', 'jean', 'meyer', 'movement', 'fail', 'cheer', 'simplist', 'peril', 'eastwood', 'pride', 'brother', 'pictur', 'mclaglen', 'guest', 'context', 'game', 'grab', 'poke', 'outsid', 'develop', 'jedi', 'stolen', 'snow', 'appreci', 'french', 'taken', 'page', 'make', 'larger', 'awak', 'ninja', 'loretta', 'struggl', 'incident', 'feminist', 'rage', 'grip', 'explicit', 'japanes', 'effort', 'dad', '19th', 'excess', 'ventur', 'loath', 'kelli', 'citi', 'protagonist', 'profan', 'cliffhang', 'get', 'expand', '1940', 'slapstick', 'loser', 'gilbert', 'distinct', 'accompani', 'tenant', 'graduat', 'olivi', 'bin', 'kansa', 'garner', 'invad', 'scriptwrit', 'midnight', 'character', 'click', 'understood', 'canada', 'pun', 'newer', 'object', 'scale', 'tax', 'heroin', 'haunt', 'del', 'ken', 'femal', 'scoop', 'crimin', 'chose', 'cain', 'bow', 'max', 'mummi', 'anna', 'sport', 'convent', 'barn', 'miseri', 'gritti', 'flirt', 'estrang', 'macho', 'dick', 'public', 'soup', 'storm', 'twice', 'client', 'battl', 'charisma', 'meaningless', 'brillianc', 'drake', 'resourc', 'jr', 'madonna', 'unattract', 'return', 'firmli', 'hardli', 'verg', 'bori', 'barrel', 'anton', 'stanwyck', 'goldberg', 'ball', 'loyalti', 'two', 'tini', 'nurs', 'inexplic', 'catchi', 'oliv', 'isol', 'sox', 'trade', 'slight', 'grief', 'paranoia', 'superman', 'credibl', 'approv', 'deliber', 'racial', 'jaw', 'saturday', 'blur', 'descript', 'quick', 'obtain', 'strike', 'em', 'mindless', 'equip', 'yell', 'tediou', 'robot', 'skeptic', 'flag', 'prevent', 'spiral', 'drift', 'clara', 'lucil', 'crisi', 'western', 'still', 'essenc', 'dvd', 'geni', 'slap', 'sucker', '11', 'judi', 'artwork', 'peopl', 'pointless', 'commend', 'pitt', 'brooklyn', 'bush', 'embrac', 'parti', 'austen', 'separ', 'minimum', 'departur', 'site', '2008', 'gun', 'flesh', 'difficult', 'crowd', 'collaps', 'structur', 'charlton', 'angel', 'mode', 'shell', 'health', 'record', 'danish', 'gray', 'bother', 'morbid', 'root', 'instal', 'richard', 'chuckl', 'say', 'curs', 'like', 'document', 'bit', 'tabl', 'diari', 'lesser', 'recreat', 'owner', 'fleet', 'wildli', 'wisdom', 'conrad', 'solut', 'lavish', 'matt', 'read', 'ban', 'priest', 'invas', 'consid', 'execut', 'spend', 'judgment', 'puzzl', 'claustrophob', 'abund', 'observ', 'approach', 'intrigu', 'greed', 'zero', 'hallucin', 'boyfriend', 'nail', 'bear', 'static', 'maker', 'gratuit', 'summar', 'valu', 'proce', 'verbal', 'mediocr', 'contempl', 'calib', 'survivor', 'fx', 'reader', 'clone', 'scott', 'russian', 'cours', 'american', 'doc', 'li', 'discoveri', 'pulp', 'among', 'eeri', 'incomprehens', 'chief', 'suck', 'gener', 'cole', 'pamela', 'empathi', 'spread', 'contract', 'pace', 'thru', 'harrison', 'incid', 'illog', 'discov', 'reward', 'hopelessli', 'profess', 'steel', 'pick', 'tad', 'befriend', 'proport', 'somewher', 'dixon', 'nut', 'repetit', 'center', 'quit', 'parent', 'richardson', 'stick', 'blame', 'predat', 'rear', 'extens', 'hyde', 'regist', 'degre', 'vacat', 'phantom', 'airport', 'omin', 'julia', 'scorses', 'honestli', 'straight', 'conspiraci', 'ador', 'taxi', 'vignett', 'servic', 'ration', 'crawford', 'worthless', 'quinn', 'disastr', 'mayor', 
'fright', 'exist', 'oscar', 'redempt', 'mankind', 'eighti', 'godfath', 'rap', 'hamilton', 'norm', 'luckili', 'certain', 'storylin', 'melodrama', 'salman', 'guitar', 'underworld', 'offend', 'jan', 'flashi', 'remov', 'jone', 'target', 'gain', 'fool', 'exercis', 'futur', 'bernard', '1997', 'hungri', 'mood', 'contain', 'dead', 'han', 'slug', 'rank', 'provid', 'courag', '80', 'allen', 'conclud', 'yard', 'case', 'tommi', 'tribe', 'inject', 'creativ', 'gore', 'wipe', 'grin', 'stair', 'seduc', 'candi', 'electron', 'mile', 'tv', 'newman', 'doctor', 'aborigin', 'gonna', 'twenti', 'pure', 'mine', 'give', 'privat', 'error', 'situat', 'portrait', 'widmark', 'moreov', 'ruin', 'halloween', 'circumst', 'directori', 'randolph', 'philosophi', 'nostalg', '1981', 'cube', 'loyal', 'theori', 'pixar', 'lip', 'racist', 'likabl', 'effect', 'prejudic', 'literatur', 'fair', 'mysteri', 'cannot', 'manhattan', 'astair', 'keaton', 'excel', 'reminisc', 'write', 'underr', 'press', 'de', 'bondag', 'khan', 'arthur', 'talk', 'heartfelt', 'rel', 'mutual', 'command', 'liberti', 'ideal', 'necessarili', 'chanc', 'bate', 'tongu', 'tell', 'guilti', 'tone', '2007', 'outcom', 'popcorn', 'racism', 'vibrant', 'address', 'neatli', 'beer', 'butt', 'marion', 'honest', 'desir', 'word', 'norman', 'cusack', 'domest', 'insur', 'ear', 'mr', 'miniseri', 'stay', 'thin', 'meant', 'bedroom', 'mortal', 'wardrob', 'resembl', 'huge', 'energet', 'inclus', 'extend', 'chan', 'iraq', 'miami', 'cave', 'gender', 'rider', 'emot', 'awesom', 'romero', 'cox', 'terri', 'respons', 'giallo', 'resid', 'pressur', 'curios', 'mock', 'tacki', 'label', 'bore', 'suspect', 'incred', 'motion', 'pleasantli', 'rob', 'pin', 'risk', 'terror', 'patienc', 'silver', 'fascin', 'eddi', 'wacki', 'tomato', 'acquaint', 'glow', 'wwe', '1945', 'beat', 'focus', 'variat', 'nun', 'utterli', 'shine', 'foul', 'chew', 'fantasi', 'cb', 'evelyn', 'ben', 'weaker', 'louis', 'dudley', 'gillian', 'coast', 'montana', 'life', 'mansion', 'divers', 'miser', 'allow', 'string', 'level', 'behind', 'presum', 'accus', 'financ', 'tripl', 'dose', 'southern', 'housewif', 'kingdom', 'detract', 'mother', 'steam', 'inde', 'entri', 'commun', 'kept', 'crack', 'tokyo', 'africa', 'bland', 'plu', 'think', 'cinematographi', 'dude', 'trial', '20th', 'bone', 'career', 'abus', 'overal', 'wayn', 'impli', 'everybodi', 'broad', 'king', 'blair', 'trigger', 'trace', 'prove', 'antonioni', 'drew', 'contemporari', 'similarli', 'columbo', 'alik', 'guy', 'filler', 'protest', 'south', 'billi', 'weakest', 'fantast', 'fals', 'shoddi', 'naiv', 'apart', 'three', 'countless', 'frame', 'ride', 'lugosi', 'willi', 'sorri', 'feet', '1989', 'banal', 'stole', 'formula', 'lack', 'loos', 'shadow', 'imposs', 'anni', 'nerv', 'brit', 'gadget', 'coup', 'disjoint', 'serial', 'basket', 'unrealist', 'indian', 'lemmon', 'selfish', 'avail', 'notic', 'satisfi', 'flow', 'popular', 'lure', 'good', 'forget', 'releas', 'defend', 'quirki', '2nd', 'corner', 'smoke', 'highli', 'hospit', 'bunch', 'pale', 'baffl', 'ned', 'engin', 'parodi', 'find', 'jacki', 'invis', 'roommat', 'grand', 'nation', 'expert', 'acquir', 'wish', 'jay', 'undeni', 'articl', 'count', 'dentist', 'gabl', 'mistak', 'censor', 'stir', 'cynic', 'salli', 'hapless', 'law', 'rent', 'oz', 'broadway', 'mirror', 'offici', 'kevin', 'frankenstein', 'aris', 'inabl', 'rate', 'done', 'strand', 'underst', 'statu', 'dimension', 'absurd', 'twilight', 'citizen', 'tip', 'claud', 'favourit', 'nichola', 'death', 'exposit', 'spell', '3000', 'show', 'laurenc', 'deliver', 'griffith', 'digit', 'superb', 'spring', 
'husband', 'nostalgia', 'tara', 'mind', 'nerd', '28', 'needless', 'bag', 'knightley', 'commit', 'eric', 'stare', 'honor', 'addit', 'bruce', 'involv', 'ice', 'subsequ', 'list', 'better', 'patriot', 'lucki', 'gere', 'sh', 'near', 'spielberg', 'version', 'popul', 'asid', 'possess', 'endless', 'hand', 'uk', 'ron', '1999', 'reviv', 'pal', 'sheriff', 'assign', 'damon', 'rather', 'replac', 'merit', 'amount', 'role', 'blake', 'bizarr', 'carol', 'step', 'organ', 'resist', 'factor', 'surviv', '1969', 'preachi', 'outfit', 'dawson', 'aka', 'demand', 'brazil', 'went', 'arrest', 'saint', 'grudg', 'worn', 'seedi', 'stuff', 'bull', 'jar', 'fuel', 'turtl', 'harder', 'hong', 'children', 'run', 'elderli', 'conan', 'pass', 'total', 'struck', 'alert', 'side', 'heel', 'cross', 'newcom', 'sci', 'treasur', 'uniformli', 'dorothi', 'belong', 'shop', 'spooki', 'stab', 'braveheart', 'dian', 'hardi', 'weekend', 'dynam', 'stalker', 'attract', 'advic', 'legendari', 'number', 'earlier', 'enterpris', 'hideou', 'fabul', '13th', 'climax', 'stallon', 'attitud', 'fresh', 'off', 'ambigu', 'cuban', 'mtv', 'randomli', 'sharp', 'incorrect', 'spree', 'mate', 'surpris', 'shallow', 'delic', 'energi', 'translat', 'choreographi', 'be', 'makeup', 'sign', 'radio', 'boom', 'push', 'woo', 'meander', 'contradict', 'conserv', 'palm', 'ordinari', 'medic', 'startl', 'cat', 'speci', 'poem', 'robber', 'deni', 'six', 'presid', 'whenev', 'gate', 'pile', 'perform', 'complet', 'humili', 'blow', 'commerci', 'brave', 'bleak', 'opposit', '20', 'dalton', 'fred', 'buy', 'spark', 'fishburn', 'mistress', 'candl', 'neglect', 'treatment', 'joy', 'shot', 'sympath', 'genr', 'everi', 'stream', 'myer', 'weird', 'dialog', 'bound', 'compos', 'pleas', 'sensibl', 'land', 'minist', 'rifl', 'combat', 'chase', 'gold', 'hope', 'neg', 'morri', 'bikini', 'london', 'ridden', 'kitti', 'teen', 'unexplain', 'iv', 'jew', 'amanda', 'preview', 'empir', 'linger', 'riot', 'fulli', 'skin', 'faster', 'farrel', 'chamberlain', 'solv', 'host', 'massacr', 'rosario', 'minu', 'screen', '1970', 'timon', '99', 'counter', 'vanish', 'valley', 'vocal', 'jealou', 'violent', 'pay', 'tension', 'messag', 'breed', 'strictli', 'errol', 'rental', 'myth', 'lust', 'dr', 'kubrick', 'pseudo', 'conflict', 'sad', 'comedian', 'sound', 'buzz', 'transport', 'liter', 'conscious', 'anoth', 'strain', 'shown', 'seventi', 'might', 'succe', 'carrey', 'oppress', 'anyon', 'thousand', 'termin', 'almost', 'ralph', 'stumbl', 'funnier', 'tradit', 'uma', 'rout', 'choru', 'ritchi', 'exclus', 'spoke', 'mel', 'razor', 'friday', 'adam', 'ahead', 'question', 'marc', 'element', 'glenn', '45', 'downright', 'easili', 'buck', 'grant', 'tragedi', 'melissa', 'montag', 'argu', 'sub', 'characteris', 'subtitl', 'composit', 'giggl', 'refresh', 'mutant', 'grayson', 'kung', 'hopkin', 'half', 'camp', 'predecessor', 'ultim', 'bad', 'abc', 'bread', 'practic', 'baddi', 'incorpor', 'improv', 'frontier', 'full', 'compens', 'milo', 'flick', 'emma', 'standard', 'review', 'heck', 'corpor', 'greater', 'craze', 'seagal', 'timberlak', 'trick', 'dispos', 'australian', 'blown', 'cameo', 'marriag', 'miracul', 'vice', 'coat', 'recit', 'dancer', 'meet', 'product', 'nazi', 'knight', 'unexpect', 'lesson', 'tierney', 'foil', 'metal', 'central', 'fought', 'schlock', 'previou', 'phone', 'arnold', 'gregori', 'lurk', 'muscl', 'debt', 'savag', 'rub', 'take', 'adapt', 'rescu', 'button', 'watcher', 'common', 'declar', 'hug', '1933', 'ward', 'instinct', 'complic', 'pg', 'pat', 'staff', '12', 'hepburn', 'exit', 'minim', 'dilemma', 'tape', 'nelson', 'contempt', 
'unlik', '1984', 'hill', 'shall', 'recogn', 'analysi', 'sex', 'amazingli', 'outer', 'redund', 'lincoln', 'puppi', 'asset', 'fruit', 'slow', 'suffer', 'hate', 'sophist', 'cult', 'hood', 'piti', 'belushi', 'manipul', 'trail', 'inher', 'ex', 'messi', 'rhythm', 'cell', 'season', 'stranger', 'latest', 'oil', 'switch', 'alec', 'joke', 'david', 'cheek', 'competit', 'psychic', 'concert', 'fight', 'antwon', 'transplant', 'villag', 'front', 'twin', 'defeat', 'resurrect', 'employe', 'written', 'comment', 'mitchel', 'recov', 'worri', 'hitchcock', 'told', 'dare', 'wit', 'occas', 'reid', 'casino', 'especi', 'danni', 'station', 'vet', 'freddi', 'supernatur', 'toe', 'felix', 'abandon', 'oppos', 'sailor', 'greatest', 'superbl', 'jule', 'packag', 'smaller', 'vanc', 'faint', 'bend', 'earl', 'martha', 'showcas', 'choppi', 'stewart', 'protect', 'dub', 'gang', 'enthusiast', 'greet', 'cruel', 'humor', 'shock', 'dark', 'alter', 'colonel', 'super', 'obsess', 'drunken', 'owen', 'steve', 'shoot', 'curli', 'abl', 'episod', '16', 'perri', 'godzilla', 'screenwrit', 'halfway', 'disabl', 'intellectu', 'secondari', 'current', 'circu', 'seed', 'amongst', 'neo', 'hostag', 'brown', 'poker', 'director', 'subtl', 'mere', 'speed', 'ask', 'sight', 'despis', 'curtain', 'jump', 'china', 'tackl', 'exagger', 'professor', 'therefor', 'agent', 'function', 'across', 'led', 'monk', 'styliz', 'farc', 'spanish', 'philosoph', 'kick', 'btw', 'caricatur', 'cruis', 'maid', 'everyth', 'disagre', 'play', 'ted', 'sadli', 'attach', 'relev', 'pursu', 'relax', 'access', 'intim', 'studi', 'swim', 'cop', 'pit', 'assert', 'conduct', 'killer', 'jerri', 'beverli', 'cartoon', 'clad', 'graham', 'wait', 'cure', 'colleagu', 'colour', 'blunt', 'lake', 'implaus', 'agenc', 'bought', 'enchant', 'youngest', 'evolut', 'irrelev', 'shelf', 'uplift', 'dana', 'weight', 'mafia', 'sappi', 'modesti', 'high', 'core', 'relat', 'gambl', 'wrestler', 'kapoor', 'orchestr', 'partli', 'valuabl', 'branagh', 'cloth', 'drag', 'unawar', 'prize', 'cap', 'produc', 'florida', 'gotten', 'tender', 'household', 'aim', 'tom', 'rememb', 'movi', 'prop', 'amus', 'busi', 'zane', 'experi', 'gori', 'stack', 'hilar', 'semi', 'enter', 'trier', 'consider', 'correct', 'ariel', 'alexandr', 'motiv', 'cruelti', 'ram', 'korean', '1968', 'confus', 'gotta', 'niro', 'tripe', 'denzel', 'method', 'silenc', 'expos', 'african', 'discern', 'thrown', 'loneli', 'afraid', 'elizabeth', 'museum', 'romanc', 'groan', 'blatant', 'surpass', 'bug', 'turd', 'attenborough', 'girl', 'thumb', 'heartwarm', 'centr', 'jimmi', 'medium', 'ranger', 'diamond', 'sand', 'armstrong', 'filth', 'bulk', 'trend', 'dress', 'nomin', 'presenc', 'capabl', 'antonio', 'hurt', 'mobster', 'constant', 'plane', 'flavor', 'daisi', 'regret', 'daniel', 'bold', 'heist', '1939', 'variou', 'scarfac', 'free', 'unfold', 'pretend', 'institut', 'helpless', 'ideolog', 'wanna', 'goe', 'juli', 'deriv', 'dread', 'visibl', 'coloni', 'chapter', 'flip', 'react', 'rosemari', 'train', 'carpent', 'vanessa', 'emerg', 'doubl', 'roger', '50', 'manner', 'preming', 'previous', 'http', 'wise', 'neat', 'paxton', 'shootout', 'rich', 'injur', 'dolph', 'parad', 'consum', 'shark', 'hopeless', 'nearli', 'german', 'recycl', 'split', 'depress', 'big', 'materi', 'sicken', 'remind', 'nasti', 'recruit', 'perfectli', 'interest', 'modest', '1978', 'stori', 'credit', 'worm', 'hypnot', 'strong', 'comprehens', 'scenario', 'plausibl', 'bridget', 'insan', 'someday', 'bitten', 'relentless', 'rush', 'homer', 'whilst', 'christma', 'freedom', 'heavili', 'suppos', 'famili', 'perman', 'women', 
'model', 'height', 'outright', 'fulfil', 'knew', 'deadli', 'violenc', 'thing', 'priceless', 'california', 'want', 'rachel', 'delet', 'handsom', 'murray', 'old', 'passeng', 'waitress', 'best', 'score', 'celebr', 'dougla', 'confin', 'metaphor', 'yeah', 'warrior', 'deer', 'delici', 'basing', 'barrymor', 'submit', 'exquisit', 'look', 'air', 'durat', 'amaz', 'derek', 'perhap', 'disgust', 'schedul', 'purpos', 'grow', 'dame', 'settl', 'meal', 'thief', 'boss', 'horrid', 'dont', 'tactic', 'entertain', 'toilet', 'art', 'creator', 'fact', 'rope', 'silent', 'shelley', 'pleasant', 'ring', 'driver', 'neighborhood', 'hooker', 'globe', 'demonstr', 'lawrenc', 'dinosaur', 'quiet', 'wizard', 'option', 'highway', 'stargat', 'leg', 'fare', 'kurosawa', 'concept', 'teenag', 'overwhelm', 'task', 'brilliantli', 'avoid', 'cup', 'independ', '1950', 'abil', '19', 'decad', 'link', 'pan', 'ya', 'trademark', 'favorit', 'relationship', 'hors', 'charismat', 'million', 'refer', 'bacal', 'ship', 'rise', 'brenda', 'etc', 'mildli', 'pool', 'honesti', 'discuss', 'soderbergh', 'polici', 'let', 'interview', 'sniper', 'place', 'psycho', 'bath', 'idiot', 'decor', '1960', 'neil', 'burn', 'psychiatrist', 'recent', 'batman', 'secretari', 'ton', 'tonight', 'couch', '000', 'archiv', 'domino', 'revolut', 'daughter', 'reveal', 'rock', 'nake', 'raymond', 'shaw', 'slasher', 'cool', 'graphic', 'anymor', 'underneath', 'use', 'strive', 'advis', 'rehears', 'tool', 'glover', 'notori', 'turkey', 'profil', 'ambit', 'money', 'gag', 'bust', 'celluloid', 'flop', 'younger', 'maniac', 'hip', 'din', '1983', 'guard', 'niec', 'aunt', 'laura', 'furthermor', 'wrote', 'rave', '1993', 'wendigo', 'fashion', 'borrow', 'inevit', 'duo', 'highlight', 'glare', 'rooki', 'seek', 'robberi', 'issu', 'era', 'polish', 'overli', 'media', 'hype', 'cigarett', 'van', 'cold', 'skip', 'servant', 'adolesc', 'marvel', 'wrong', 'recognit', 'breathtak', 'west', 'bash', 'bsg', 'chronicl', 'cassidi', 'cue', 'belief', 'nolt', 'peac', 'lighter', 'nois', 'shut', 'ballet', 'unleash', 'face', 'consciou', 'perfect', 'bottom', 'essenti', 'gap', 'hat', 'mighti', 'malon', 'mistaken', 'comb', 'hackman', 'cycl', 'proud', 'agre', 'insight', 'shorter', 'valentin', 'naughti', 'prophet', 'clair', '17', 'dream', 'jet', 'lawyer', 'increasingli', 'creek', 'eaten', 'bye', 'walsh', 'cast', '1972', 'swept', 'overr', 'improvis', 'plight', 'emphasi', 'sinc', 'imag', 'antic', 'threat', 'cohen', 'trait', 'ancient', 'wast', 'interior', 'gentleman', 'highest', 'player', 'fetch', 'helicopt', 'anti', 'ash', 'first', 'traffic', 'rampag', 'holocaust', 'damm', 'assassin', 'held', 'exposur', 'lord', 'kind', 'tale', 'ol', 'improb', 'ritter', 'boo', 'contest', 'hung', 'respect', 'varieti', 'equival', 'carlo', 'convinc', 'third', 'slam', 'boyer', 'climact', 'dumb', 'spain', 'rod', 'phenomenon', 'yawn', 'miscast', 'exchang', 'muddl', 'alison', 'lundgren', 'hay', 'amateur', 'soviet', 'holm', 'cousin', 'child', 'elabor', 'upset', 'glass', 'adventur', 'scope', 'flock', 'lena', 'revers', '1985', 'lauren', 'lit', 'doo', 'wed', 'kong', 'danger', 'dozen', 'crippl', 'earnest', 'outdat', 'file', 'furiou', 'fisher', 'afterward', 'isra', 'begin', 'extra', 'zombi', 'funniest', 'came', 'lion', 'fabric', 'monkey', 'benefit', 'admittedli', 'garbo', 'junior', 'en', 'shift', 'began', 'paus', 'numb', 'fifteen', 'underground', 'jame', 'individu', 'throw', 'spoil', 'short', 'posey', 'hair', 'st', 'sleazi', 'jam', 'interact', 'clear', 'bake', 'merci', 'der', 'plate', 'vampir', 'footag', 'pour', 'muppet', 'album', 'student', 'focu', 
'ironi', 'notch', 'mostli', 'hell', 'titan', 'chick', 'union', 'card', 'feed', 'well', 'headach', 'quest', 'washington', 'everyon', 'itali', 'integr', 'horribl', 'vs', 'bay', 'blend', 'seriou', 'bonu', 'kumar', 'falk', 'shape', 'shower', 'actress', 'drive', 'arc', 'victoria', 'enthral', 'bewar', 'undertak', 'stark', 'sceneri', 'nobl', 'ho', 'conceiv', 'bottl', 'reluct', 'promin', 'undoubtedli', 'hello', 'tremend', 'spectacl', 'attorney', 'ingrid', 'reput', 'tragic', 'editor', 'palma', 'climb', 'found', 'redeem', 'clearli', '1920', 'got', 'brush', 'rapidli', 'bollywood', '60', 'fu', 'church', 'attent', 'fever', 'unseen', 'dinner', 'naschi', 'main', 'clock', 'paula', '18', 'along', 'time', 'advertis', 'tech', 'martin', 'instrument', 'punish', 'race', 'dull', 'sinatra', 'phenomen', 'live', 'oldest', 'enough', 'edgar', 'spoof', 'wherea', 'without', 'matthau', 'bigger', 'sword', 'judg', 'stunt', 'wilder', 'odd', 'gear', 'goof', 'awaken', 'biko', 'dish', 'advantag', 'requir', 'walker', 'similar', 'decapit', 'deliv', 'sin', 'urg', 'nowaday', 'plot', 'indulg', 'cia', 'stereotyp', 'believ', 'basketbal', 'lanc', 'project', 'larri', '2003', 'term', '1987', 'tarantino', 'mani', 'oper', 'india', 'cloud', 'stage', 'belov', 'transit', 'esther', 'clip', 'agenda', 'draw', 'food', 'quietli', 'breakdown', 'destin', 'incoher', 'hold', 'color', 'bogu', 'bang', 'defens', 'basement', 'passabl', 'hole', 'mask', 'twist', 'histori', 'nearbi', 'soul', 'twelv', 'mouth', 'serv', 'childhood', 'ghost', 'congratul', 'kathryn', 'top', 'un', 'shanghai', 'flair', 'uninterest', 'dylan', 'robert', 'cinderella', 'succeed', 'poverti', 'europ', 'resort', 'sixti', 'tasteless', 'landscap', 'machin', 'nobodi', 'beatti', 'cover', 'frantic', 'kudo', 'minor', 'rat', 'court', 'rant', 'nonsens', 'cd', 'blank', 'adequ', 'despic', 'mermaid', 'prostitut', 'search', 'ga', 'et', 'airplan', 'patrick', 'opera', 'europa', 'street', 'legal', 'tooth', 'dictat', 'skill', 'smart', 'what', 'decid', 'setup', 'jeremi', 'poster', 'kolchak', 'weather', 'happen', 'although', 'subject', 'rex', 'optimist', 'arguabl', 'critiqu', 'craft', 'phillip', 'becam', 'critic', 'academi', 'birth', 'champion', 'self', 'distract', 'harvey', 'must', 'josh', 'despair', 'snap', 'widow', 'buster', 'forc', 'radic', 'lost', 'inan', 'oh', 'porn', 'overcom', 'slightest', 'watchabl', 'progress', 'bbc', 'titl', 'loud', 'clue', 'bett', 'adopt', 'end', 'wood', 'coupl', 'invent', 'seduct', 'clan', 'legaci', 'seven', 'crude', 'emili', 'skull', 'internet', '3d', 'wide', 'liber', 'ponder', 'bounc', 'legitim', 'clerk', 'cheat', 'ustinov', 'sunshin', 'addict', 'underli', 'moron', 'slowli', 'test', 'narr', 'foolish', 'san', 'insert', 'text', 'lifestyl', 'macarthur', 'nuditi', 'proclaim', 'prime', 'wet', 'travesti', 'compar', 'sympathi', 'bacon', 'match', 'digniti', 'clever', 'cake', 'will', 'reliabl', 'angl', 'size', 'spoiler', 'rape', 'sensual', 'stop', 'luca', 'amitabh', 'paramount', 'ill', 'edward', 'attend', 'furi', 'fix', 'consequ', 'await', 'reunion', 'jenni', 'trashi', 'duel', 'donald', 'duti', 'rip', 'obstacl', 'meg', 'squad', 'june', 'spacey', 'rocki', 'suffici', 'besid', 'sure', 'christoph', 'porno', 'indi', 'actual', 'least', 'audrey', 'certainli', 'millionair', 'panic', 'dave', 'leagu', 'cliff', 'spontan', 'profound', 'susan', 'lewi', 'trilog', 'alarm', 'serious', 'import', 'lame', 'toronto', 'descend', 'valid', 'alleg', 'cooki', 'area', 'spray', 'filmmak', 'fi', 'ever', 'cost', 'heartbreak', 'salt', 'worthwhil', 'tens', 'viru', 'substitut', 'lester', 'culmin', 'realli', 
'strang', 'armi', 'blind', 'budget', 'latin', 'sweep', 'dement', 'ginger', 'unimagin', 'ticket', 'spit', 'blue', 'poison', 'enemi', 'rubber', 'heaven', 'dog', 'boast', 'sidney', 'custom', 'natali', 'bank', 'author', 'retain', 'pair', 'induc', 'matter', 'tempt', 'voight', 'unorigin', 'spinal', 'curti', 'classic', 'grasp', 'kidnap', 'primari', 'blast', 'kyle', 'territori', 'call', 'achiev', 'snake', 'asylum', 'tiresom', 'sleep', 'delv', 'karen', 'prank', 'co', 'fighter', 'hal', 'troop', 'bibl', 'screw', 'downey', 'pro', 'impos', 'austin', 'abound', 'vision', 'basebal', 'see', 'cecil', 'greatli', 'romp', 'relief', 'true', 'sleepwalk', 'script', 'check', 'dealt', 'diseas', 'exhaust', 'length', 'sarah', 'tough', 'cathol', 'morn', 'drone', 'britain', 'arrang', 'taught', 'curiou', 'convey', 'compani', 'goofi', 'unfunni', 'russia', 'virtual', 'round', '2002', 'tour', 'lifeless', 'morgan', 'iii', 'forgot', 'inaccur', 'satir', 'boyl', 'establish', 'overlong', 'resum', 'ego', '2004', 'asian', 'blew', 'scratch', 'may', 'dean', 'island', 'refus', 'scene', 'spine', 'manag', 'milk', 'crap', 'vulgar', 'distort', 'newspap', 'simplic', 'min', 'biker', 'topic', 'chip', 'liner', 'evok', 'immatur', 'minut', 'catherin', 'aw', 'ala', 'foster', 'leap', 'puppet', 'parker', 'othello', 'unsettl', 'path', 'subtli', 'slice', 'right', 'block', 'nemesi', 'user', 'realis', 'convers', 'thug', 'mystic', 'icon', 'higher', 'expens', 'illustr', 'born', 'fellow', 'endur', 'writer', 'capot', 'hyster', 'alway', 'noth', 'construct', 'inherit', 'traci', 'caught', 'altman', 'helm', 'dwarf', 'drain', 'song', 'experienc', 'futurist', 'etern', 'took', 'beg', 'reserv', 'crappi', 'cuba', 'bubbl', 'injuri', 'chicken', 'collect', 'jacket', 'nicol', 'kiss', 'militari', 'publish', 'slip', 'goer', 'bourn', 'mickey', 'debat', 'scar', 'whoopi', 'nicholson', 'teeth', 'earli', 'gothic', 'girlfriend', 'menac', 'genuin', 'drunk', 'cleverli', 'peck', 'cgi', 'proof', 'exact', 'hammi', 'fonda', 'brought', 'success', 'justin', 'franki', 'forti', 'petti', 'influenc', 'joel', 'vh', 'toler', 'describ', 'travel', 'communist', '1976', 'proceed', 'buddi', 'alex', 'fiction', 'amor', 'decent', 'littl', 'painter', 'creep', 'alicia', 'peer', 'dud', 'pull', 'julian', 'fluid', 'bravo', 'soldier', 'orient', 'assur', 'owe', 'dialogu', 'planet', 'moral', 'roth', 'shi', 'sea', 'stand', 'devil', 'summer', 'blatantli', 'mob', 'rambo', 'come', 'unknown', 'cinematograph', 'yeti', 'scienc', 'sheet', 'detail', 'thriller', 'anger', 'bar', 'bird', 'locat', 'gem', 'depict', 'fontain', 'mean', 'frequent', 'athlet', 'bring', 'regardless', 'devoid', 'ten', 'ugli', 'calm', 'second', 'indic', 'wall', 'anybodi', 'deaf', 'fest', 'virginia', 'stoog', 'joe', 'realm', 'unit', 'bathroom', 'forward', 'anthoni', 'distinguish', 'arriv', 'disappear', 'loss', 'linda', 'miller', 'plain', 'enorm', 'ensembl', 'mountain', 'anyway', 'asham', 'le', 'cut', 'club', 'deniro', 'hitler', 'gene', 'massiv', 'march', 'marri', 'immort', 'four', 'duck', 'yet', 'tourist', 'load', 'superhero', 'wolf', 'indiffer', 'easi', 'useless', 'venom', 'send', 'burton', 'audio', 'ami', 'jeffrey', 'traumat', 'nowher', 'seat', 'jesu', 'precis', 'perspect', 'hey', 'spike', 'comfort', 'wonder', 'evid', 'grey', 'thread', 'util', 'drama', 'instantli', 'detach', 'scotland', 'wilson', 'explod', 'bean', 'raj', 'favour', 'met', 'scroog', 'temper', 'mol', 'heavi', 'visual', 'homosexu', 'hardcor', 'nightclub', 'whether', 'screenplay', 'handl', 'stretch', 'turn', '1980', 'foot', 'virtu', 'even', 'larg', 'abysm', 'testament', 
'disguis', 'moon', 'chang', 'distanc', 'kate', 'ray', 'hain', 'fame', 'footbal', 'gradual', 'quarter', 'answer', 'sever', 'embarrass', 'meaning', 'intric', 'stood', 'broadcast', 'maintain', 'surfac', 'healthi', 'bias', 'pacino', 'orlean', 'differ', 'magnific', 'sens', 'harsh', 'jordan', 'year', 'unless', 'boxer', 'worship', 'terribl', 'featur', 'analyz', 'disast', 'understand', 'logic', 'earth', 'other', 'principl', 'rhyme', 'dysfunct', 'chain', '70', 'spot', 'sir', 'carla', 'martial', 'distribut', 'drawn', 'bloodi', 'wreck', 'would', 'down', 'connor', 'pattern', 'nude', 'instant', 'pleasur', 'vast', 'form', 'save', 'dismiss', 'slimi', 'assort', 'cultur', 'grandfath', 'system', 'gorgeou', 'walken', 'exot', 'river', 'assault', 'foxx', 'misfortun', 'gilliam', 'tire', 'grotesqu', 'beatl', 'mar', 'stephen', 'engag', 'assist', 'fianc', 'terrif', 'pierc', 'worker', 'da', 'unbear', 'often', 'stellar', 'notabl', 'mobil', 'lock', 'howl', 'judd', 'occup', 'stanley', 'abrupt', 'set', 'sammi', 'whine', 'sublim', 'bond', 'repeatedli', 'turner', 'glori', 'queen', 'sissi', 'henri', 'fart', 'absent', 'intens', 'mark', 'prom', 'canadian', 'artist', 'farm', 'countri', 'maria', 'absolut', 'occur', 'automat', 'qualifi', 'familiar', 'rabbit', 'junk', 'back', 'cooper', 'chainsaw', 'hot', 'com', 'rude', 'expedit', 'nanci', 'storytel', 'environ', 'fell', 'trailer', 'talki', 'promot', 'grim', 'post', 'fanci', 'mike', 'cant', 'keen', 'immens', 'particularli', 'scream', 'unpredict', 'mario', 'smash', 'che'} ###Markdown These words themselves don't tell us much, however if one of these words occured with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here. Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send an bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. 
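Before that, here is a quick optional sketch of how one might start digging into the question raised above, using only objects that already exist at this point: `new_vocabulary` and `original_vocabulary` (the plain Python sets compared above) and `new_X` (the list of word-lists built during preprocessing). The first number it prints should match the length check in the next cell.
###Code
from collections import Counter

# Sanity check: the bag of words encoding should have one column per entry in the
# new vocabulary, so this should match the length reported by the next cell.
print(len(new_vocabulary))

# Rank the words that appear only in the new vocabulary by how often they occur in the
# newly collected reviews; a few very frequent newcomers would hint at what has changed.
new_word_counts = Counter(word for review in new_X for word in review)
fresh_words = new_vocabulary - original_vocabulary
print(sorted(fresh_words, key=lambda w: new_word_counts[w], reverse=True)[:20])
###Output
_____no_output_____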
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd

# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])

new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)

pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# new_xgb = None

# Solution:
new_xgb = sagemaker.estimator.Estimator(container,                              # The location of the container we wish to use
                                        role,                                   # What is our current IAM Role
                                        train_instance_count=1,                 # How many compute instances
                                        train_instance_type='ml.m4.xlarge',     # What kind of compute instances
                                        output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                        sagemaker_session=session)

# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
#       used when training the original model.
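# Added note on the solution below: reusing the original model's settings keeps the two models
# directly comparable. In XGBoost terms, max_depth bounds tree depth, eta is the learning rate,
# gamma and min_child_weight make splits more conservative, subsample grows each tree on a random
# 80% of the rows, objective='binary:logistic' matches the 0/1 sentiment labels, and
# early_stopping_rounds=10 halts boosting once validation-error stops improving for 10 consecutive
# rounds, even if the num_round=500 cap has not been reached.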
# Solution:
new_xgb.set_hyperparameters(max_depth=5,
                            eta=0.2,
                            gamma=4,
                            min_child_weight=6,
                            subsample=0.8,
                            silent=0,
                            objective='binary:logistic',
                            early_stopping_rounds=10,
                            num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
#       find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None

s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')

# TODO: Using the new validation and training data, 'fit' your new model.
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
2020-05-05 02:37:35 Starting - Starting the training job...
2020-05-05 02:37:36 Starting - Launching requested ML instances......
2020-05-05 02:39:03 Starting - Preparing the instances for training......
2020-05-05 02:39:52 Downloading - Downloading input data...
2020-05-05 02:40:29 Training - Training image download completed. Training in progress..Arguments: train
[2020-05-05:02:40:30:INFO] Running standalone xgboost training.
[2020-05-05:02:40:30:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8483.98mb
[2020-05-05:02:40:30:INFO] Determined delimiter of CSV input is ','
[02:40:30] S3DistributionType set as FullyReplicated
[02:40:32] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,
[2020-05-05:02:40:32:INFO] Determined delimiter of CSV input is ','
[02:40:32] S3DistributionType set as FullyReplicated
[02:40:33] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,
[02:40:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5
[0]#011train-error:0.307933#011validation-error:0.3149
Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.
Will train until validation-error hasn't improved in 10 rounds.
[02:40:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 8 pruned nodes, max_depth=5 [1]#011train-error:0.2924#011validation-error:0.3031 [02:40:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [2]#011train-error:0.278533#011validation-error:0.2897 [02:40:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 52 extra nodes, 6 pruned nodes, max_depth=5 [3]#011train-error:0.259533#011validation-error:0.2764 [02:40:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [4]#011train-error:0.2662#011validation-error:0.2805 [02:40:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.261733#011validation-error:0.2774 [02:40:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [6]#011train-error:0.253933#011validation-error:0.2709 [02:40:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [7]#011train-error:0.248933#011validation-error:0.2638 [02:40:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5 [8]#011train-error:0.245867#011validation-error:0.2614 [02:40:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [9]#011train-error:0.242267#011validation-error:0.2546 [02:40:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [10]#011train-error:0.231467#011validation-error:0.2477 [02:40:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [11]#011train-error:0.224533#011validation-error:0.2428 [02:40:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [12]#011train-error:0.220267#011validation-error:0.2415 [02:40:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [13]#011train-error:0.217867#011validation-error:0.2391 [02:40:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [14]#011train-error:0.213533#011validation-error:0.2368 [02:40:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 18 pruned nodes, max_depth=5 [15]#011train-error:0.209267#011validation-error:0.2346 [02:40:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [16]#011train-error:0.208533#011validation-error:0.2346 [02:40:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 18 pruned nodes, max_depth=5 [17]#011train-error:0.205933#011validation-error:0.2323 [02:41:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [18]#011train-error:0.2036#011validation-error:0.2307 [02:41:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [19]#011train-error:0.1982#011validation-error:0.2244 [02:41:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [20]#011train-error:0.197667#011validation-error:0.2239 [02:41:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5 [21]#011train-error:0.1954#011validation-error:0.2196 [02:41:05] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 16 pruned nodes, max_depth=5 [22]#011train-error:0.192133#011validation-error:0.2169 [02:41:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [23]#011train-error:0.193333#011validation-error:0.2176 [02:41:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [24]#011train-error:0.189733#011validation-error:0.2144 [02:41:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [25]#011train-error:0.188333#011validation-error:0.2138 [02:41:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [26]#011train-error:0.185533#011validation-error:0.2126 [02:41:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [27]#011train-error:0.182867#011validation-error:0.2101 [02:41:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [28]#011train-error:0.1802#011validation-error:0.2088 [02:41:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [29]#011train-error:0.178867#011validation-error:0.2078 [02:41:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [30]#011train-error:0.1756#011validation-error:0.2053 [02:41:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [31]#011train-error:0.174333#011validation-error:0.2036 [02:41:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [32]#011train-error:0.172867#011validation-error:0.2028 [02:41:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5 [33]#011train-error:0.171#011validation-error:0.2031 [02:41:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [34]#011train-error:0.170133#011validation-error:0.201 [02:41:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5 [35]#011train-error:0.169467#011validation-error:0.2001 [02:41:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [36]#011train-error:0.168333#011validation-error:0.2004 [02:41:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [37]#011train-error:0.166867#011validation-error:0.1988 [02:41:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [38]#011train-error:0.166867#011validation-error:0.1987 [02:41:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [39]#011train-error:0.166333#011validation-error:0.1981 [02:41:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [40]#011train-error:0.164533#011validation-error:0.1979 [02:41:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [41]#011train-error:0.163067#011validation-error:0.1959 [02:41:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5 [42]#011train-error:0.161867#011validation-error:0.1951 [02:41:32] src/tree/updater_prune.cc:74: 
tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [43]#011train-error:0.161467#011validation-error:0.1939 [02:41:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [44]#011train-error:0.159533#011validation-error:0.1926 [02:41:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [45]#011train-error:0.1588#011validation-error:0.1925 [02:41:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [46]#011train-error:0.156933#011validation-error:0.1921 [02:41:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [47]#011train-error:0.1558#011validation-error:0.1915 [02:41:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5 [48]#011train-error:0.154133#011validation-error:0.1912 [02:41:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [49]#011train-error:0.1536#011validation-error:0.1906 [02:41:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [50]#011train-error:0.1514#011validation-error:0.1911 [02:41:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5 [51]#011train-error:0.151333#011validation-error:0.1911 [02:41:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [52]#011train-error:0.151733#011validation-error:0.1906 [02:41:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [53]#011train-error:0.150667#011validation-error:0.1889 [02:41:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [54]#011train-error:0.1494#011validation-error:0.1895 [02:41:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [55]#011train-error:0.149467#011validation-error:0.1888 [02:41:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 0 pruned nodes, max_depth=5 [56]#011train-error:0.148#011validation-error:0.1885 [02:41:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5 [57]#011train-error:0.1474#011validation-error:0.1879 [02:41:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [58]#011train-error:0.145#011validation-error:0.1858 [02:41:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5 [59]#011train-error:0.144867#011validation-error:0.1853 [02:41:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5 [60]#011train-error:0.1442#011validation-error:0.1853 [02:41:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [61]#011train-error:0.143667#011validation-error:0.1837 [02:41:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [62]#011train-error:0.1428#011validation-error:0.1837 [02:41:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [63]#011train-error:0.141867#011validation-error:0.183 [02:41:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 
16 pruned nodes, max_depth=5 [64]#011train-error:0.141467#011validation-error:0.1831 [02:42:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [65]#011train-error:0.141267#011validation-error:0.1826 [02:42:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5 [66]#011train-error:0.1398#011validation-error:0.1822 [02:42:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [67]#011train-error:0.139867#011validation-error:0.1823 [02:42:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [68]#011train-error:0.140133#011validation-error:0.1828 [02:42:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [69]#011train-error:0.1392#011validation-error:0.1822 [02:42:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 [70]#011train-error:0.139667#011validation-error:0.1831 [02:42:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [71]#011train-error:0.139133#011validation-error:0.1835 [02:42:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [72]#011train-error:0.138267#011validation-error:0.1835 [02:42:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [73]#011train-error:0.1386#011validation-error:0.1822 [02:42:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [74]#011train-error:0.137933#011validation-error:0.1825 [02:42:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [75]#011train-error:0.136933#011validation-error:0.1818 [02:42:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [76]#011train-error:0.1368#011validation-error:0.1813 [02:42:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5 [77]#011train-error:0.136867#011validation-error:0.1813 [02:42:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5 [78]#011train-error:0.1362#011validation-error:0.1814 [02:42:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [79]#011train-error:0.1366#011validation-error:0.1815 [02:42:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5 [80]#011train-error:0.135933#011validation-error:0.182 [02:42:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [81]#011train-error:0.1354#011validation-error:0.1817 [02:42:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [82]#011train-error:0.134667#011validation-error:0.1821 [02:42:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [83]#011train-error:0.134667#011validation-error:0.1813 [02:42:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5 [84]#011train-error:0.1338#011validation-error:0.1811 [02:42:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 
[85]#011train-error:0.1324#011validation-error:0.1804 [02:42:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5 [86]#011train-error:0.132#011validation-error:0.1798 [02:42:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [87]#011train-error:0.1314#011validation-error:0.1794 [02:42:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5 [88]#011train-error:0.1302#011validation-error:0.1785 [02:42:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5 [89]#011train-error:0.13#011validation-error:0.1785 [02:42:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [90]#011train-error:0.129733#011validation-error:0.1785 [02:42:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5 [91]#011train-error:0.128933#011validation-error:0.1784 [02:42:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5 [92]#011train-error:0.129#011validation-error:0.1789 [02:42:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5 [93]#011train-error:0.128733#011validation-error:0.1785 [02:42:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5 [94]#011train-error:0.1288#011validation-error:0.179 [02:42:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5 [95]#011train-error:0.128933#011validation-error:0.1789 [02:42:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5 [96]#011train-error:0.1278#011validation-error:0.1791 [02:42:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5 [97]#011train-error:0.127467#011validation-error:0.1785 [02:42:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5 [98]#011train-error:0.127333#011validation-error:0.179 [02:42:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 16 pruned nodes, max_depth=5 [99]#011train-error:0.126933#011validation-error:0.1796 [02:42:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [100]#011train-error:0.1272#011validation-error:0.1792 [02:42:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 0 pruned nodes, max_depth=5 [101]#011train-error:0.126267#011validation-error:0.1795 Stopping. Best iteration: [91]#011train-error:0.128933#011validation-error:0.1784  2020-05-05 02:42:54 Uploading - Uploading generated training model 2020-05-05 02:42:54 Completed - Training job completed Training seconds: 182 Billable seconds: 182 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. 
We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ...................Arguments: serve [2020-05-05 02:46:19 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-05-05 02:46:19 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-05-05 02:46:19 +0000] [1] [INFO] Using worker: gevent [2020-05-05 02:46:19 +0000] [38] [INFO] Booting worker with pid: 38 [2020-05-05 02:46:19 +0000] [39] [INFO] Booting worker with pid: 39 [2020-05-05 02:46:19 +0000] [40] [INFO] Booting worker with pid: 40 [2020-05-05 02:46:19 +0000] [41] [INFO] Booting worker with pid: 41 [2020-05-05:02:46:19:INFO] Model loaded successfully for worker : 38 [2020-05-05:02:46:19:INFO] Model loaded successfully for worker : 39 [2020-05-05:02:46:19:INFO] Model loaded successfully for worker : 40 [2020-05-05:02:46:19:INFO] Model loaded successfully for worker : 41 [2020-05-05:02:46:49:INFO] Sniff delimiter as ',' [2020-05-05:02:46:49:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:50:INFO] Sniff delimiter as ',' [2020-05-05:02:46:50:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:50:INFO] Sniff delimiter as ',' [2020-05-05:02:46:50:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:50:INFO] Sniff delimiter as ',' [2020-05-05:02:46:50:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:49:INFO] Sniff delimiter as ',' [2020-05-05:02:46:49:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:50:INFO] Sniff delimiter as ',' [2020-05-05:02:46:50:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:50:INFO] Sniff delimiter as ',' [2020-05-05:02:46:50:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:50:INFO] Sniff delimiter as ',' [2020-05-05:02:46:50:INFO] Determined delimiter of CSV input is ',' 2020-05-05T02:46:47.570:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-05-05:02:46:52:INFO] Sniff delimiter as ',' [2020-05-05:02:46:52:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:52:INFO] Sniff delimiter as ',' [2020-05-05:02:46:52:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:52:INFO] Sniff delimiter as ',' [2020-05-05:02:46:52:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:52:INFO] Sniff delimiter as ',' [2020-05-05:02:46:52:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:53:INFO] Sniff delimiter as ',' [2020-05-05:02:46:53:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:53:INFO] Sniff delimiter as ',' [2020-05-05:02:46:53:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:53:INFO] Sniff delimiter as ',' [2020-05-05:02:46:53:INFO] Determined delimiter of CSV input is ',' 
[2020-05-05:02:46:53:INFO] Sniff delimiter as ',' [2020-05-05:02:46:53:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:54:INFO] Sniff delimiter as ',' [2020-05-05:02:46:54:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:54:INFO] Sniff delimiter as ',' [2020-05-05:02:46:54:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:55:INFO] Sniff delimiter as ',' [2020-05-05:02:46:55:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:55:INFO] Sniff delimiter as ',' [2020-05-05:02:46:55:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:55:INFO] Sniff delimiter as ',' [2020-05-05:02:46:55:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:55:INFO] Sniff delimiter as ',' [2020-05-05:02:46:55:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:57:INFO] Sniff delimiter as ',' [2020-05-05:02:46:57:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:57:INFO] Sniff delimiter as ',' [2020-05-05:02:46:57:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:57:INFO] Sniff delimiter as ',' [2020-05-05:02:46:57:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:57:INFO] Sniff delimiter as ',' [2020-05-05:02:46:57:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:57:INFO] Sniff delimiter as ',' [2020-05-05:02:46:57:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:57:INFO] Sniff delimiter as ',' [2020-05-05:02:46:57:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:58:INFO] Sniff delimiter as ',' [2020-05-05:02:46:58:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:58:INFO] Sniff delimiter as ',' [2020-05-05:02:46:58:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:59:INFO] Sniff delimiter as ',' [2020-05-05:02:46:59:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:46:59:INFO] Sniff delimiter as ',' [2020-05-05:02:46:59:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:00:INFO] Sniff delimiter as ',' [2020-05-05:02:47:00:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:00:INFO] Sniff delimiter as ',' [2020-05-05:02:47:00:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:00:INFO] Sniff delimiter as ',' [2020-05-05:02:47:00:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:00:INFO] Sniff delimiter as ',' [2020-05-05:02:47:00:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:00:INFO] Sniff delimiter as ',' [2020-05-05:02:47:00:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:00:INFO] Sniff delimiter as ',' [2020-05-05:02:47:00:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:02:INFO] Sniff delimiter as ',' [2020-05-05:02:47:02:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:02:INFO] Sniff delimiter as ',' [2020-05-05:02:47:02:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:02:INFO] Sniff delimiter as ',' [2020-05-05:02:47:02:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:02:INFO] Sniff delimiter as ',' [2020-05-05:02:47:02:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:02:INFO] Sniff delimiter as ',' [2020-05-05:02:47:02:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:02:INFO] Sniff delimiter as ',' [2020-05-05:02:47:02:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:02:INFO] Sniff delimiter as ',' [2020-05-05:02:47:02:INFO] Determined delimiter of CSV input is ',' 
[2020-05-05:02:47:02:INFO] Sniff delimiter as ',' [2020-05-05:02:47:02:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:04:INFO] Sniff delimiter as ',' [2020-05-05:02:47:04:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:04:INFO] Sniff delimiter as ',' [2020-05-05:02:47:04:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:05:INFO] Sniff delimiter as ',' [2020-05-05:02:47:05:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:05:INFO] Sniff delimiter as ',' [2020-05-05:02:47:05:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:05:INFO] Sniff delimiter as ',' [2020-05-05:02:47:05:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:05:INFO] Sniff delimiter as ',' [2020-05-05:02:47:05:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:07:INFO] Sniff delimiter as ',' [2020-05-05:02:47:07:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:07:INFO] Sniff delimiter as ',' [2020-05-05:02:47:07:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:07:INFO] Sniff delimiter as ',' [2020-05-05:02:47:07:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:07:INFO] Sniff delimiter as ',' [2020-05-05:02:47:07:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:07:INFO] Sniff delimiter as ',' [2020-05-05:02:47:07:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:07:INFO] Sniff delimiter as ',' [2020-05-05:02:47:07:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:07:INFO] Sniff delimiter as ',' [2020-05-05:02:47:07:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:07:INFO] Sniff delimiter as ',' [2020-05-05:02:47:07:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:09:INFO] Sniff delimiter as ',' [2020-05-05:02:47:09:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:09:INFO] Sniff delimiter as ',' [2020-05-05:02:47:09:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:09:INFO] Sniff delimiter as ',' [2020-05-05:02:47:09:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:09:INFO] Sniff delimiter as ',' [2020-05-05:02:47:09:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:09:INFO] Sniff delimiter as ',' [2020-05-05:02:47:09:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:10:INFO] Sniff delimiter as ',' [2020-05-05:02:47:10:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:09:INFO] Sniff delimiter as ',' [2020-05-05:02:47:09:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:10:INFO] Sniff delimiter as ',' [2020-05-05:02:47:10:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:12:INFO] Sniff delimiter as ',' [2020-05-05:02:47:12:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:12:INFO] Sniff delimiter as ',' [2020-05-05:02:47:12:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:12:INFO] Sniff delimiter as ',' [2020-05-05:02:47:12:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:12:INFO] Sniff delimiter as ',' [2020-05-05:02:47:12:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:12:INFO] Sniff delimiter as ',' [2020-05-05:02:47:12:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:12:INFO] Sniff delimiter as ',' [2020-05-05:02:47:12:INFO] Determined delimiter of CSV input is ',' [2020-05-05:02:47:12:INFO] Sniff delimiter as ',' [2020-05-05:02:47:12:INFO] Determined delimiter of CSV input is ',' 
[2020-05-05:02:47:12:INFO] Sniff delimiter as ',' [2020-05-05:02:47:12:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-us-west-2-202593872157/xgboost-2020-05-05-02-43-18-908/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. 
As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. # new_xgb_endpoint_config_name = None # Solution: new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. # new_xgb_endpoint_config_info = None # Solution: new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. 
Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). 
It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output _____no_output_____ ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
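To make this step concrete before looking at the helper defined in the next cell, here is a minimal, illustrative sketch of the kind of normalization described above. The sample review is invented, and a simple regex stands in for the BeautifulSoup-based HTML stripping that the notebook's own `review_to_words` helper uses.

```python
# Illustrative sketch only; the real helper, review_to_words, is defined in the next cell.
import re

import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer

sample = "<br />The movie was GREAT -- I loved it!"   # invented example review

text = re.sub(r"<[^>]+>", " ", sample)                # crude HTML tag removal (stand-in for BeautifulSoup)
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())     # convert to lower case, keep only alphanumerics
words = [w for w in text.split() if w not in stopwords.words("english")]   # remove stopwords
words = [PorterStemmer().stem(w) for w in words]      # stem each remaining word

print(words)   # roughly: ['movi', 'great', 'love']
```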
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words # review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
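The important detail is that the vocabulary is learned from the training reviews alone and then reused, unchanged, for the test reviews. The toy sketch below (with invented documents, not the actual reviews) illustrates that pattern with scikit-learn's `CountVectorizer`, the same class used in the next cell; any word that never appears in the training documents is simply ignored at transform time.

```python
# Toy illustration of the fit-on-train / transform-on-test pattern; the documents are invented.
from sklearn.feature_extraction.text import CountVectorizer

train_docs = ["great movie loved it", "terrible movie hated it"]
test_docs = ["loved the soundtrack"]        # 'soundtrack' never appears in the training documents

vectorizer = CountVectorizer()
train_features = vectorizer.fit_transform(train_docs)   # vocabulary is built from the training data only
test_features = vectorizer.transform(test_docs)         # unseen words are dropped, not added

print(sorted(vectorizer.vocabulary_))   # ['great', 'hated', 'it', 'loved', 'movie', 'terrible']
print(test_features.toarray())          # only the 'loved' column is non-zero
```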
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
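# (train_X currently holds the bag-of-words features for all 25,000 training reviews, each a
# 5,000-dimensional count vector, so this split leaves 15,000 rows for training and 10,000 for
# validation; these are the 15000x5000 and 10000x5000 matrices that appear later in the training log.)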
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker.
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2020-05-03 14:12:09 Starting - Starting the training job... 2020-05-03 14:12:12 Starting - Launching requested ML instances... 2020-05-03 14:13:09 Starting - Preparing the instances for training......... 2020-05-03 14:14:12 Downloading - Downloading input data... 2020-05-03 14:15:08 Training - Training image download completed. Training in progress..Arguments: train [2020-05-03:14:15:09:INFO] Running standalone xgboost training. [2020-05-03:14:15:09:INFO] File size need to be processed in the node: 238.47mb.
Available memory size in the node: 8481.02mb [2020-05-03:14:15:09:INFO] Determined delimiter of CSV input is ',' [14:15:09] S3DistributionType set as FullyReplicated [14:15:11] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-05-03:14:15:11:INFO] Determined delimiter of CSV input is ',' [14:15:11] S3DistributionType set as FullyReplicated [14:15:12] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [14:15:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [0]#011train-error:0.294667#011validation-error:0.3023 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [14:15:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 10 pruned nodes, max_depth=5 [1]#011train-error:0.278867#011validation-error:0.2897 [14:15:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [2]#011train-error:0.277333#011validation-error:0.2898 [14:15:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [3]#011train-error:0.274867#011validation-error:0.2891 [14:15:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [4]#011train-error:0.2582#011validation-error:0.2762 [14:15:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.250333#011validation-error:0.2685 [14:15:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [6]#011train-error:0.244#011validation-error:0.2597 [14:15:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 2 pruned nodes, max_depth=5 [7]#011train-error:0.236267#011validation-error:0.2498 [14:15:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [8]#011train-error:0.228#011validation-error:0.2434 [14:15:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [9]#011train-error:0.221333#011validation-error:0.238 [14:15:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [10]#011train-error:0.218067#011validation-error:0.2337 [14:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [11]#011train-error:0.2126#011validation-error:0.2315 [14:15:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [12]#011train-error:0.206933#011validation-error:0.2281 [14:15:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [13]#011train-error:0.205067#011validation-error:0.2234 [14:15:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5 [14]#011train-error:0.200867#011validation-error:0.2203 [14:15:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [15]#011train-error:0.198067#011validation-error:0.2165 [14:15:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [16]#011train-error:0.1966#011validation-error:0.2146 [14:15:38] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [17]#011train-error:0.193467#011validation-error:0.2126 [14:15:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [18]#011train-error:0.190267#011validation-error:0.2107 [14:15:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5 [19]#011train-error:0.188267#011validation-error:0.2095 [14:15:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [20]#011train-error:0.1858#011validation-error:0.2063 [14:15:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [21]#011train-error:0.183533#011validation-error:0.2023 [14:15:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [22]#011train-error:0.1788#011validation-error:0.1982 [14:15:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [23]#011train-error:0.1788#011validation-error:0.1993 [14:15:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5 [24]#011train-error:0.177733#011validation-error:0.1966 [14:15:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 2 pruned nodes, max_depth=5 [25]#011train-error:0.175133#011validation-error:0.1947 [14:15:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [26]#011train-error:0.171933#011validation-error:0.1924 [14:15:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5 [27]#011train-error:0.1698#011validation-error:0.1908 [14:15:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5 [28]#011train-error:0.166067#011validation-error:0.1886 [14:15:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [29]#011train-error:0.163867#011validation-error:0.1872 [14:15:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [30]#011train-error:0.162133#011validation-error:0.1849 [14:15:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [31]#011train-error:0.159667#011validation-error:0.1843 [14:15:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [32]#011train-error:0.158067#011validation-error:0.1831 [14:15:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5 [33]#011train-error:0.158467#011validation-error:0.1808 [14:16:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [34]#011train-error:0.156267#011validation-error:0.1815 [14:16:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [35]#011train-error:0.155733#011validation-error:0.1796 [14:16:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [36]#011train-error:0.156467#011validation-error:0.179 [14:16:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5 [37]#011train-error:0.155533#011validation-error:0.1793 [14:16:05] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [38]#011train-error:0.153733#011validation-error:0.1787 [14:16:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [39]#011train-error:0.1522#011validation-error:0.178 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code xgb_transformer.wait() ###Output ...................Arguments: serve [2020-05-03 12:40:19 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-05-03 12:40:19 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-05-03 12:40:19 +0000] [1] [INFO] Using worker: gevent [2020-05-03 12:40:19 +0000] [39] [INFO] Booting worker with pid: 39 [2020-05-03 12:40:19 +0000] [40] [INFO] Booting worker with pid: 40 [2020-05-03 12:40:19 +0000] [41] [INFO] Booting worker with pid: 41 [2020-05-03 12:40:19 +0000] [42] [INFO] Booting worker with pid: 42 [2020-05-03:12:40:19:INFO] Model loaded successfully for worker : 39 [2020-05-03:12:40:19:INFO] Model loaded successfully for worker : 40 [2020-05-03:12:40:19:INFO] Model loaded successfully for worker : 41 [2020-05-03:12:40:19:INFO] Model loaded successfully for worker : 42 [2020-05-03:12:40:42:INFO] Sniff delimiter as ',' [2020-05-03:12:40:42:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:43:INFO] Sniff delimiter as ',' [2020-05-03:12:40:43:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:43:INFO] Sniff delimiter as ',' [2020-05-03:12:40:43:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:43:INFO] Sniff delimiter as ',' [2020-05-03:12:40:43:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:45:INFO] Sniff delimiter as ',' [2020-05-03:12:40:45:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:45:INFO] Sniff delimiter as ',' [2020-05-03:12:40:45:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:45:INFO] Sniff delimiter as ',' [2020-05-03:12:40:45:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:45:INFO] Sniff delimiter as ',' [2020-05-03:12:40:45:INFO] Determined delimiter of CSV input is ',' 2020-05-03T12:40:40.233:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-05-03:12:40:48:INFO] Sniff delimiter as ',' [2020-05-03:12:40:48:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:48:INFO] Sniff delimiter as ',' [2020-05-03:12:40:48:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:48:INFO] Sniff delimiter as ',' [2020-05-03:12:40:48:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:48:INFO] Sniff delimiter as ',' [2020-05-03:12:40:48:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:48:INFO] Sniff delimiter as ',' [2020-05-03:12:40:48:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:48:INFO] Sniff delimiter as ',' [2020-05-03:12:40:48:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:49:INFO] Sniff delimiter as ',' [2020-05-03:12:40:49:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:49:INFO] Sniff delimiter as ',' [2020-05-03:12:40:49:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:50:INFO] Sniff delimiter as ',' [2020-05-03:12:40:50:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:51:INFO] Sniff delimiter as ',' [2020-05-03:12:40:51:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:50:INFO] Sniff delimiter as ',' [2020-05-03:12:40:50:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:51:INFO] Sniff delimiter as ',' [2020-05-03:12:40:51:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:51:INFO] Sniff delimiter as ',' [2020-05-03:12:40:51:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:51:INFO] Sniff delimiter as ',' [2020-05-03:12:40:51:INFO] Determined delimiter of CSV input is ',' 
[2020-05-03:12:40:52:INFO] Sniff delimiter as ',' [2020-05-03:12:40:52:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:52:INFO] Sniff delimiter as ',' [2020-05-03:12:40:52:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:53:INFO] Sniff delimiter as ',' [2020-05-03:12:40:53:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:53:INFO] Sniff delimiter as ',' [2020-05-03:12:40:53:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:53:INFO] Sniff delimiter as ',' [2020-05-03:12:40:53:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:53:INFO] Sniff delimiter as ',' [2020-05-03:12:40:53:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:53:INFO] Sniff delimiter as ',' [2020-05-03:12:40:53:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:53:INFO] Sniff delimiter as ',' [2020-05-03:12:40:53:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:55:INFO] Sniff delimiter as ',' [2020-05-03:12:40:55:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:55:INFO] Sniff delimiter as ',' [2020-05-03:12:40:55:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:55:INFO] Sniff delimiter as ',' [2020-05-03:12:40:55:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:55:INFO] Sniff delimiter as ',' [2020-05-03:12:40:55:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:55:INFO] Sniff delimiter as ',' [2020-05-03:12:40:55:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:55:INFO] Sniff delimiter as ',' [2020-05-03:12:40:55:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:56:INFO] Sniff delimiter as ',' [2020-05-03:12:40:56:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:56:INFO] Sniff delimiter as ',' [2020-05-03:12:40:56:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:57:INFO] Sniff delimiter as ',' [2020-05-03:12:40:57:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:57:INFO] Sniff delimiter as ',' [2020-05-03:12:40:57:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:57:INFO] Sniff delimiter as ',' [2020-05-03:12:40:57:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:57:INFO] Sniff delimiter as ',' [2020-05-03:12:40:57:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:57:INFO] Sniff delimiter as ',' [2020-05-03:12:40:57:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:57:INFO] Sniff delimiter as ',' [2020-05-03:12:40:57:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:59:INFO] Sniff delimiter as ',' [2020-05-03:12:40:59:INFO] Sniff delimiter as ',' [2020-05-03:12:40:59:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:40:59:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:00:INFO] Sniff delimiter as ',' [2020-05-03:12:41:00:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:00:INFO] Sniff delimiter as ',' [2020-05-03:12:41:00:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:00:INFO] Sniff delimiter as ',' [2020-05-03:12:41:00:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:00:INFO] Sniff delimiter as ',' [2020-05-03:12:41:00:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:01:INFO] Sniff delimiter as ',' [2020-05-03:12:41:01:INFO] Sniff delimiter as ',' [2020-05-03:12:41:01:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:01:INFO] Sniff delimiter as ',' [2020-05-03:12:41:01:INFO] 
Determined delimiter of CSV input is ',' [2020-05-03:12:41:01:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:01:INFO] Sniff delimiter as ',' [2020-05-03:12:41:01:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:02:INFO] Sniff delimiter as ',' [2020-05-03:12:41:02:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:02:INFO] Sniff delimiter as ',' [2020-05-03:12:41:02:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:02:INFO] Sniff delimiter as ',' [2020-05-03:12:41:02:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:02:INFO] Sniff delimiter as ',' [2020-05-03:12:41:02:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:03:INFO] Sniff delimiter as ',' [2020-05-03:12:41:03:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:03:INFO] Sniff delimiter as ',' [2020-05-03:12:41:03:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:03:INFO] Sniff delimiter as ',' [2020-05-03:12:41:03:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:03:INFO] Sniff delimiter as ',' [2020-05-03:12:41:03:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:05:INFO] Sniff delimiter as ',' [2020-05-03:12:41:05:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:05:INFO] Sniff delimiter as ',' [2020-05-03:12:41:05:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:05:INFO] Sniff delimiter as ',' [2020-05-03:12:41:05:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:05:INFO] Sniff delimiter as ',' [2020-05-03:12:41:05:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:05:INFO] Sniff delimiter as ',' [2020-05-03:12:41:05:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:05:INFO] Sniff delimiter as ',' [2020-05-03:12:41:05:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:06:INFO] Sniff delimiter as ',' [2020-05-03:12:41:06:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:41:06:INFO] Sniff delimiter as ',' [2020-05-03:12:41:06:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/369.2 KiB (4.4 MiB/s) with 1 file(s) remaining Completed 369.2 KiB/369.2 KiB (6.2 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-west-2-463062221782/xgboost-2020-05-03-12-37-18-383/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. 
As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. 
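Once the upload in the next cell has run, a quick sanity check is to list what now sits under our prefix in the session's default bucket. The cell below is a minimal boto3 sketch of such a check; it is not part of the original notebook and simply assumes the `session` and `prefix` variables defined earlier.
###Code
import boto3

s3_client = boto3.client('s3')
# List the objects stored under our prefix in the default SageMaker bucket
uploaded = s3_client.list_objects_v2(Bucket=session.default_bucket(), Prefix=prefix)
uploaded_keys = [obj['Key'] for obj in uploaded.get('Contents', [])]
###Output _____no_output_____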
###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ....................Arguments: serve [2020-05-03 12:55:32 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-05-03 12:55:32 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-05-03 12:55:32 +0000] [1] [INFO] Using worker: gevent [2020-05-03 12:55:32 +0000] [38] [INFO] Booting worker with pid: 38 [2020-05-03 12:55:32 +0000] [39] [INFO] Booting worker with pid: 39 [2020-05-03 12:55:32 +0000] [40] [INFO] Booting worker with pid: 40 [2020-05-03 12:55:32 +0000] [41] [INFO] Booting worker with pid: 41 [2020-05-03:12:55:32:INFO] Model loaded successfully for worker : 38 [2020-05-03:12:55:33:INFO] Model loaded successfully for worker : 39 [2020-05-03:12:55:33:INFO] Model loaded successfully for worker : 40 [2020-05-03:12:55:33:INFO] Model loaded successfully for worker : 41 [2020-05-03:12:55:56:INFO] Sniff delimiter as ',' [2020-05-03:12:55:56:INFO] Sniff delimiter as ',' [2020-05-03:12:55:56:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:55:56:INFO] Sniff delimiter as ',' [2020-05-03:12:55:56:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:55:56:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:55:56:INFO] Sniff delimiter as ',' [2020-05-03:12:55:56:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:55:56:INFO] Sniff delimiter as ',' [2020-05-03:12:55:56:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:55:56:INFO] Sniff delimiter as ',' [2020-05-03:12:55:56:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:55:56:INFO] Sniff delimiter as ',' [2020-05-03:12:55:56:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:55:56:INFO] Sniff delimiter as ',' [2020-05-03:12:55:56:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:55:58:INFO] Sniff delimiter as ',' [2020-05-03:12:55:58:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:55:58:INFO] Sniff delimiter as ',' [2020-05-03:12:55:58:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:55:58:INFO] Sniff delimiter as ',' [2020-05-03:12:55:58:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:55:58:INFO] Sniff delimiter as ',' [2020-05-03:12:55:58:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:55:58:INFO] Sniff delimiter as ',' [2020-05-03:12:55:58:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:55:58:INFO] Sniff delimiter as ',' [2020-05-03:12:55:58:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:55:58:INFO] Sniff delimiter as ',' [2020-05-03:12:55:58:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:55:58:INFO] 
Sniff delimiter as ',' [2020-05-03:12:55:58:INFO] Determined delimiter of CSV input is ',' 2020-05-03T12:55:53.435:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-05-03:12:56:01:INFO] Sniff delimiter as ',' [2020-05-03:12:56:01:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:01:INFO] Sniff delimiter as ',' [2020-05-03:12:56:01:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:01:INFO] Sniff delimiter as ',' [2020-05-03:12:56:01:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:01:INFO] Sniff delimiter as ',' [2020-05-03:12:56:01:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:01:INFO] Sniff delimiter as ',' [2020-05-03:12:56:01:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:01:INFO] Sniff delimiter as ',' [2020-05-03:12:56:01:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:03:INFO] Sniff delimiter as ',' [2020-05-03:12:56:03:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:03:INFO] Sniff delimiter as ',' [2020-05-03:12:56:03:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:03:INFO] Sniff delimiter as ',' [2020-05-03:12:56:03:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:03:INFO] Sniff delimiter as ',' [2020-05-03:12:56:03:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:03:INFO] Sniff delimiter as ',' [2020-05-03:12:56:03:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:03:INFO] Sniff delimiter as ',' [2020-05-03:12:56:03:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:03:INFO] Sniff delimiter as ',' [2020-05-03:12:56:03:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:03:INFO] Sniff delimiter as ',' [2020-05-03:12:56:03:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:06:INFO] Sniff delimiter as ',' [2020-05-03:12:56:06:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:06:INFO] Sniff delimiter as ',' [2020-05-03:12:56:06:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:06:INFO] Sniff delimiter as ',' [2020-05-03:12:56:06:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:06:INFO] Sniff delimiter as ',' [2020-05-03:12:56:06:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:06:INFO] Sniff delimiter as ',' [2020-05-03:12:56:06:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:06:INFO] Sniff delimiter as ',' [2020-05-03:12:56:06:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:06:INFO] Sniff delimiter as ',' [2020-05-03:12:56:06:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:06:INFO] Sniff delimiter as ',' [2020-05-03:12:56:06:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:08:INFO] Sniff delimiter as ',' [2020-05-03:12:56:08:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:08:INFO] Sniff delimiter as ',' [2020-05-03:12:56:08:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:08:INFO] Sniff delimiter as ',' [2020-05-03:12:56:08:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:08:INFO] Sniff delimiter as ',' [2020-05-03:12:56:08:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:08:INFO] Sniff delimiter as ',' [2020-05-03:12:56:08:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:08:INFO] Sniff delimiter as ',' [2020-05-03:12:56:08:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:08:INFO] Sniff delimiter as 
',' [2020-05-03:12:56:08:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:08:INFO] Sniff delimiter as ',' [2020-05-03:12:56:08:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:11:INFO] Sniff delimiter as ',' [2020-05-03:12:56:11:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:11:INFO] Sniff delimiter as ',' [2020-05-03:12:56:11:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:11:INFO] Sniff delimiter as ',' [2020-05-03:12:56:11:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:11:INFO] Sniff delimiter as ',' [2020-05-03:12:56:11:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:11:INFO] Sniff delimiter as ',' [2020-05-03:12:56:11:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:11:INFO] Sniff delimiter as ',' [2020-05-03:12:56:11:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:13:INFO] Sniff delimiter as ',' [2020-05-03:12:56:13:INFO] Sniff delimiter as ',' [2020-05-03:12:56:13:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:13:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:13:INFO] Sniff delimiter as ',' [2020-05-03:12:56:13:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:13:INFO] Sniff delimiter as ',' [2020-05-03:12:56:13:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:13:INFO] Sniff delimiter as ',' [2020-05-03:12:56:13:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:13:INFO] Sniff delimiter as ',' [2020-05-03:12:56:13:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:13:INFO] Sniff delimiter as ',' [2020-05-03:12:56:13:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:13:INFO] Sniff delimiter as ',' [2020-05-03:12:56:13:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:15:INFO] Sniff delimiter as ',' [2020-05-03:12:56:15:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:15:INFO] Sniff delimiter as ',' [2020-05-03:12:56:15:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:16:INFO] Sniff delimiter as ',' [2020-05-03:12:56:16:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:16:INFO] Sniff delimiter as ',' [2020-05-03:12:56:16:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:15:INFO] Sniff delimiter as ',' [2020-05-03:12:56:15:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:15:INFO] Sniff delimiter as ',' [2020-05-03:12:56:15:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:16:INFO] Sniff delimiter as ',' [2020-05-03:12:56:16:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:16:INFO] Sniff delimiter as ',' [2020-05-03:12:56:16:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:18:INFO] Sniff delimiter as ',' [2020-05-03:12:56:18:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:18:INFO] Sniff delimiter as ',' [2020-05-03:12:56:18:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:18:INFO] Sniff delimiter as ',' [2020-05-03:12:56:18:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:18:INFO] Sniff delimiter as ',' [2020-05-03:12:56:18:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:18:INFO] Sniff delimiter as ',' [2020-05-03:12:56:18:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:18:INFO] Sniff delimiter as ',' [2020-05-03:12:56:18:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:18:INFO] Sniff delimiter as ',' 
[2020-05-03:12:56:18:INFO] Determined delimiter of CSV input is ',' [2020-05-03:12:56:18:INFO] Sniff delimiter as ',' [2020-05-03:12:56:18:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code print(xgb_transformer.output_path) print(data_dir) !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/369.4 KiB (4.2 MiB/s) with 1 file(s) remaining Completed 369.4 KiB/369.4 KiB (5.9 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-west-2-463062221782/xgboost-2020-05-03-12-52-32-869/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') ###Output -------------! ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. 
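If generators are new to you, the tiny standalone example below (unrelated to the SageMaker code) shows the `yield`/`next` pattern that the following cell relies on.
###Code
def squares(n):
    # Values are produced lazily, one per call to next(), instead of building a full list
    for i in range(n):
        yield i * i

gen = squares(3)
first = next(gen)   # first == 0
second = next(gen)  # second == 1
###Output _____no_output_____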
###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['idiot', 'dentist', 'find', 'wife', 'unfaith', 'new', 'stori', 'line', 'howev', 'author', 'manag', 'creat', 'stupid', 'disgust', 'film', 'enjoy', 'watch', 'kid', 'vomit', 'see', 'dentist', 'imagin', 'pull', 'wife', 'teeth', 'bloodi', 'horror', 'type', 'go', 'see', 'rent', 'film', 'move', 'someth', 'els', 'fair', 'ladi', 'anyon'], 0) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code orig_only = original_vocabulary - new_vocabulary print(orig_only) ###Output {'spill', 'victorian', 'ghetto', 'playboy', '21st', 'weari', 'reincarn'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code new_only = new_vocabulary - original_vocabulary print(new_only) ###Output {'dubiou', 'optimist', 'banana', 'sophi', 'masterson', 'orchestr', 'omin'} ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. ###Code [(x,vocabulary[x]) for x in orig_only] [(x, new_vectorizer.vocabulary_[x]) for x in new_only] original_vocabulary = new_vocabulary = new_only = orig_only = None ###Output _____no_output_____ ###Markdown (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model.
This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send an bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) new_XV = None pd.concat([new_val_y, new_val_X], axis=1, copy=False).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) new_val_y = new_val_X = None pd.concat([new_train_y, new_train_X], axis=1, copy=False).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) new_train_y = new_train_X = None ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. 
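# Note: session.upload_data returns the S3 URI of each uploaded file; these URIs are
# reused below when constructing the training inputs and the batch transform job.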
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2020-05-03 14:52:24 Starting - Starting the training job... 2020-05-03 14:52:27 Starting - Launching requested ML instances...... 2020-05-03 14:53:50 Starting - Preparing the instances for training......... 2020-05-03 14:55:15 Downloading - Downloading input data... 2020-05-03 14:55:45 Training - Training image download completed. Training in progress.Arguments: train [2020-05-03:14:55:46:INFO] Running standalone xgboost training. [2020-05-03:14:55:46:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8489.04mb [2020-05-03:14:55:46:INFO] Determined delimiter of CSV input is ',' [14:55:46] S3DistributionType set as FullyReplicated [14:55:48] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-05-03:14:55:48:INFO] Determined delimiter of CSV input is ',' [14:55:48] S3DistributionType set as FullyReplicated [14:55:49] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [14:55:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 4 pruned nodes, max_depth=5 [0]#011train-error:0.3142#011validation-error:0.3219 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. 
[14:55:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 52 extra nodes, 4 pruned nodes, max_depth=5 [1]#011train-error:0.297267#011validation-error:0.306 [14:55:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [2]#011train-error:0.2816#011validation-error:0.2895 [14:55:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 0 pruned nodes, max_depth=5 [3]#011train-error:0.273#011validation-error:0.2806 [14:55:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5 [4]#011train-error:0.269733#011validation-error:0.2791 [14:55:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.265467#011validation-error:0.2769 [14:56:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [6]#011train-error:0.259267#011validation-error:0.271 [14:56:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [7]#011train-error:0.258333#011validation-error:0.2684 [14:56:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 2 pruned nodes, max_depth=5 [8]#011train-error:0.2494#011validation-error:0.2593 [14:56:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [9]#011train-error:0.241933#011validation-error:0.253 [14:56:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [10]#011train-error:0.2324#011validation-error:0.2494 [14:56:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5 [11]#011train-error:0.2256#011validation-error:0.2434 [14:56:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [12]#011train-error:0.219333#011validation-error:0.2405 [14:56:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [13]#011train-error:0.2142#011validation-error:0.2375 [14:56:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [14]#011train-error:0.211533#011validation-error:0.2331 [14:56:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [15]#011train-error:0.2092#011validation-error:0.2296 [14:56:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [16]#011train-error:0.204467#011validation-error:0.2269 [14:56:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 0 pruned nodes, max_depth=5 [17]#011train-error:0.201533#011validation-error:0.2251 [14:56:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [18]#011train-error:0.198467#011validation-error:0.2232 [14:56:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [19]#011train-error:0.1958#011validation-error:0.2186 [14:56:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [20]#011train-error:0.193333#011validation-error:0.2174 [14:56:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [21]#011train-error:0.1896#011validation-error:0.2157 [14:56:21] src/tree/updater_prune.cc:74: tree 
pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [22]#011train-error:0.186133#011validation-error:0.2137 [14:56:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [23]#011train-error:0.184867#011validation-error:0.2116 [14:56:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5 [24]#011train-error:0.182867#011validation-error:0.2087 [14:56:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [25]#011train-error:0.180333#011validation-error:0.2065 [14:56:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [26]#011train-error:0.179#011validation-error:0.2047 [14:56:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [27]#011train-error:0.1778#011validation-error:0.2034 [14:56:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 2 pruned nodes, max_depth=5 [28]#011train-error:0.175133#011validation-error:0.2028 [14:56:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [29]#011train-error:0.1728#011validation-error:0.2015 [14:56:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 10 pruned nodes, max_depth=5 [30]#011train-error:0.171467#011validation-error:0.2006 [14:56:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [31]#011train-error:0.169333#011validation-error:0.1987 [14:56:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [32]#011train-error:0.167933#011validation-error:0.1986 [14:56:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [33]#011train-error:0.1654#011validation-error:0.1968 [14:56:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [34]#011train-error:0.165533#011validation-error:0.1953 [14:56:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [35]#011train-error:0.163067#011validation-error:0.1941 [14:56:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [36]#011train-error:0.1614#011validation-error:0.1935 [14:56:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [37]#011train-error:0.160933#011validation-error:0.1926 [14:56:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [38]#011train-error:0.159933#011validation-error:0.191 [14:56:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [39]#011train-error:0.158267#011validation-error:0.1896 [14:56:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [40]#011train-error:0.158267#011validation-error:0.1907 [14:56:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [41]#011train-error:0.156867#011validation-error:0.1914 [14:56:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [42]#011train-error:0.156733#011validation-error:0.1901 [14:56:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 
extra nodes, 12 pruned nodes, max_depth=5 [43]#011train-error:0.156133#011validation-error:0.1898 [14:56:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 16 pruned nodes, max_depth=5 [44]#011train-error:0.154533#011validation-error:0.1909 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ...................Arguments: serve [2020-05-03 15:01:44 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-05-03 15:01:44 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-05-03 15:01:44 +0000] [1] [INFO] Using worker: gevent [2020-05-03 15:01:44 +0000] [38] [INFO] Booting worker with pid: 38 [2020-05-03 15:01:44 +0000] [39] [INFO] Booting worker with pid: 39 [2020-05-03 15:01:44 +0000] [40] [INFO] Booting worker with pid: 40 [2020-05-03:15:01:44:INFO] Model loaded successfully for worker : 38 [2020-05-03 15:01:44 +0000] [41] [INFO] Booting worker with pid: 41 [2020-05-03:15:01:44:INFO] Model loaded successfully for worker : 40 [2020-05-03:15:01:44:INFO] Model loaded successfully for worker : 39 [2020-05-03:15:01:44:INFO] Model loaded successfully for worker : 41 [2020-05-03:15:02:12:INFO] Sniff delimiter as ',' [2020-05-03:15:02:12:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:12:INFO] Sniff delimiter as ',' [2020-05-03:15:02:12:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:12:INFO] Sniff delimiter as ',' [2020-05-03:15:02:12:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:12:INFO] Sniff delimiter as ',' [2020-05-03:15:02:12:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:12:INFO] Sniff delimiter as ',' [2020-05-03:15:02:12:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:12:INFO] Sniff delimiter as ',' [2020-05-03:15:02:12:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:12:INFO] Sniff delimiter as ',' [2020-05-03:15:02:12:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:12:INFO] Sniff delimiter as ',' [2020-05-03:15:02:12:INFO] Determined delimiter of CSV input is ',' 2020-05-03T15:02:09.309:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD 
[2020-05-03:15:02:14:INFO] Sniff delimiter as ',' [2020-05-03:15:02:14:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:14:INFO] Sniff delimiter as ',' [2020-05-03:15:02:14:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:14:INFO] Sniff delimiter as ',' [2020-05-03:15:02:14:INFO] Sniff delimiter as ',' [2020-05-03:15:02:14:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:14:INFO] Sniff delimiter as ',' [2020-05-03:15:02:14:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:14:INFO] Sniff delimiter as ',' [2020-05-03:15:02:14:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:14:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:16:INFO] Sniff delimiter as ',' [2020-05-03:15:02:16:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:16:INFO] Sniff delimiter as ',' [2020-05-03:15:02:16:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:17:INFO] Sniff delimiter as ',' [2020-05-03:15:02:17:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:17:INFO] Sniff delimiter as ',' [2020-05-03:15:02:17:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:17:INFO] Sniff delimiter as ',' [2020-05-03:15:02:17:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:17:INFO] Sniff delimiter as ',' [2020-05-03:15:02:17:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:17:INFO] Sniff delimiter as ',' [2020-05-03:15:02:17:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:17:INFO] Sniff delimiter as ',' [2020-05-03:15:02:17:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:19:INFO] Sniff delimiter as ',' [2020-05-03:15:02:19:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:20:INFO] Sniff delimiter as ',' [2020-05-03:15:02:20:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:20:INFO] Sniff delimiter as ',' [2020-05-03:15:02:20:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:20:INFO] Sniff delimiter as ',' [2020-05-03:15:02:20:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:19:INFO] Sniff delimiter as ',' [2020-05-03:15:02:19:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:20:INFO] Sniff delimiter as ',' [2020-05-03:15:02:20:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:20:INFO] Sniff delimiter as ',' [2020-05-03:15:02:20:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:20:INFO] Sniff delimiter as ',' [2020-05-03:15:02:20:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:22:INFO] Sniff delimiter as ',' [2020-05-03:15:02:22:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:22:INFO] Sniff delimiter as ',' [2020-05-03:15:02:22:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:22:INFO] Sniff delimiter as ',' [2020-05-03:15:02:22:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:22:INFO] Sniff delimiter as ',' [2020-05-03:15:02:22:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:22:INFO] Sniff delimiter as ',' [2020-05-03:15:02:22:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:22:INFO] Sniff delimiter as ',' [2020-05-03:15:02:22:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:22:INFO] Sniff delimiter as ',' [2020-05-03:15:02:22:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:22:INFO] Sniff delimiter as ',' [2020-05-03:15:02:22:INFO] Determined delimiter of CSV input is ',' 
[2020-05-03:15:02:24:INFO] Sniff delimiter as ',' [2020-05-03:15:02:24:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:24:INFO] Sniff delimiter as ',' [2020-05-03:15:02:24:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:24:INFO] Sniff delimiter as ',' [2020-05-03:15:02:24:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:24:INFO] Sniff delimiter as ',' [2020-05-03:15:02:24:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:25:INFO] Sniff delimiter as ',' [2020-05-03:15:02:25:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:25:INFO] Sniff delimiter as ',' [2020-05-03:15:02:25:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:25:INFO] Sniff delimiter as ',' [2020-05-03:15:02:25:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:25:INFO] Sniff delimiter as ',' [2020-05-03:15:02:25:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:27:INFO] Sniff delimiter as ',' [2020-05-03:15:02:27:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:27:INFO] Sniff delimiter as ',' [2020-05-03:15:02:27:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:27:INFO] Sniff delimiter as ',' [2020-05-03:15:02:27:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:27:INFO] Sniff delimiter as ',' [2020-05-03:15:02:27:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:29:INFO] Sniff delimiter as ',' [2020-05-03:15:02:29:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:29:INFO] Sniff delimiter as ',' [2020-05-03:15:02:29:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:29:INFO] Sniff delimiter as ',' [2020-05-03:15:02:29:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:29:INFO] Sniff delimiter as ',' [2020-05-03:15:02:29:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:29:INFO] Sniff delimiter as ',' [2020-05-03:15:02:29:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:29:INFO] Sniff delimiter as ',' [2020-05-03:15:02:29:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:30:INFO] Sniff delimiter as ',' [2020-05-03:15:02:30:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:30:INFO] Sniff delimiter as ',' [2020-05-03:15:02:30:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:31:INFO] Sniff delimiter as ',' [2020-05-03:15:02:31:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:31:INFO] Sniff delimiter as ',' [2020-05-03:15:02:31:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:32:INFO] Sniff delimiter as ',' [2020-05-03:15:02:32:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:32:INFO] Sniff delimiter as ',' [2020-05-03:15:02:32:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:32:INFO] Sniff delimiter as ',' [2020-05-03:15:02:32:INFO] Sniff delimiter as ',' [2020-05-03:15:02:32:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:32:INFO] Sniff delimiter as ',' [2020-05-03:15:02:32:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:32:INFO] Sniff delimiter as ',' [2020-05-03:15:02:32:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:32:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:34:INFO] Sniff delimiter as ',' [2020-05-03:15:02:34:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:34:INFO] Sniff delimiter as ',' [2020-05-03:15:02:34:INFO] Determined delimiter of CSV input is ',' 
[2020-05-03:15:02:34:INFO] Sniff delimiter as ',' [2020-05-03:15:02:34:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:34:INFO] Sniff delimiter as ',' [2020-05-03:15:02:34:INFO] Sniff delimiter as ',' [2020-05-03:15:02:34:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:34:INFO] Sniff delimiter as ',' [2020-05-03:15:02:34:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:34:INFO] Sniff delimiter as ',' [2020-05-03:15:02:34:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:34:INFO] Sniff delimiter as ',' [2020-05-03:15:02:34:INFO] Determined delimiter of CSV input is ',' [2020-05-03:15:02:34:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/366.4 KiB (4.3 MiB/s) with 1 file(s) remaining Completed 366.4 KiB/366.4 KiB (6.1 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-west-2-463062221782/xgboost-2020-05-03-14-58-38-270/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. 
###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "sentiment-update-xgb-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. 
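# Note: update_endpoint re-points the existing endpoint (same name, and therefore the same
# URL used by any client application) at the new endpoint configuration, so the model is
# swapped with no downtime.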
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output -------------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. 
Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. ###Code %%capture # Make sure that we use SageMaker 1.x !pip install sagemaker==1.72.0 # For sending text messages !pip install twilio import sys sys.path.insert(0, '..') from twilio_helper import twilio_helper notebook_name = 'IMDB Sentiment Analysis - XGBoost (Updating a Model).ipynb' twilio_helper.send_text_message(f'{notebook_name}: test') ###Output Text message sent: SM3edc21da995842df88f030851c48a0a8 ###Markdown Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code # %mkdir ../data # !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz # !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output _____no_output_____ ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
###Code
import os
import glob

def read_imdb_data(data_dir='../data/aclImdb'):
    data = {}
    labels = {}

    for data_type in ['train', 'test']:
        data[data_type] = {}
        labels[data_type] = {}

        for sentiment in ['pos', 'neg']:
            data[data_type][sentiment] = []
            labels[data_type][sentiment] = []

            path = os.path.join(data_dir, data_type, sentiment, '*.txt')
            files = glob.glob(path)

            for f in files:
                with open(f) as review:
                    data[data_type][sentiment].append(review.read())
                    # Here we represent a positive review by '1' and a negative review by '0'
                    labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)

            assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
                    "{}/{} data size does not match labels size".format(data_type, sentiment)

    return data, labels

data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
            len(data['train']['pos']), len(data['train']['neg']),
            len(data['test']['pos']), len(data['test']['neg'])))

from sklearn.utils import shuffle

def prepare_imdb_data(data, labels):
    """Prepare training and test sets from IMDb movie reviews."""

    # Combine positive and negative reviews and labels
    data_train = data['train']['pos'] + data['train']['neg']
    data_test = data['test']['pos'] + data['test']['neg']
    labels_train = labels['train']['pos'] + labels['train']['neg']
    labels_test = labels['test']['pos'] + labels['test']['neg']

    # Shuffle reviews and corresponding labels within training and test sets
    data_train, labels_train = shuffle(data_train, labels_train)
    data_test, labels_test = shuffle(data_test, labels_test)

    # Return unified training data, test data, training labels, test labels
    return data_train, data_test, labels_train, labels_test

train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))

train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words print(review_to_words(train_X[100])) import os import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
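As a small, self-contained illustration of that point (using made-up sentences rather than data from this notebook), the sketch below shows that a `CountVectorizer` fitted on training text alone learns a fixed vocabulary, and any test-set words outside that vocabulary are simply dropped when the test documents are transformed. The cell that follows does the same thing for the IMDb reviews, with caching added.
###Code
# Toy sketch (illustrative only): the vectorizer is fit on training text, so unseen test words are ignored.
from sklearn.feature_extraction.text import CountVectorizer

toy_train = ["great movie great acting", "terrible plot"]
toy_test = ["great soundtrack", "completely unseen words"]

toy_vectorizer = CountVectorizer()
toy_train_features = toy_vectorizer.fit_transform(toy_train).toarray()  # vocabulary built from toy_train only
toy_test_features = toy_vectorizer.transform(toy_test).toarray()        # words not in that vocabulary are dropped

print(toy_vectorizer.vocabulary_)  # word -> column index mapping learned from the training text
print(toy_test_features)           # counts only for words that already exist in the training vocabulary
###Output
_____no_output_____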
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer ### joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays # from sklearn.externals import joblib import joblib def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) twilio_helper.send_text_message(f'{notebook_name}: BoW extracting finished') len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])

val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
    os.makedirs(data_dir)

pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)

# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker

session = sagemaker.Session() # Store the current SageMaker session

# S3 prefix (which folder will we use)
prefix = 'sentiment-update'

test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker.
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role

# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()

# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost', '1.0-1')

# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
                                    role,      # What is our current IAM Role
                                    train_instance_count=1, # How many compute instances
                                    train_instance_type='ml.m4.xlarge', # What kind of compute instances
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                    sagemaker_session=session)

# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
                        eta=0.2,
                        gamma=4,
                        min_child_weight=6,
                        subsample=0.8,
                        silent=0,
                        objective='binary:logistic',
                        early_stopping_rounds=10,
                        num_round=500)
###Output
Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')

xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})

twilio_helper.send_text_message(f'{notebook_name}: model training finished')
###Output
2021-04-20 16:36:32 Starting - Starting the training job...
2021-04-20 16:36:39 Starting - Launching requested ML instances......
2021-04-20 16:37:51 Starting - Preparing the instances for training.........
2021-04-20 16:39:32 Downloading - Downloading input data
2021-04-20 16:39:32 Training - Downloading the training image......
2021-04-20 16:40:12 Training - Training image download completed. Training in progress.INFO:sagemaker-containers:Imported framework sagemaker_xgboost_container.training
INFO:sagemaker-containers:Failed to parse hyperparameter objective value binary:logistic to Json.
Returning the value itself INFO:sagemaker-containers:No GPUs detected (normal if no gpus installed) INFO:sagemaker_xgboost_container.training:Running XGBoost Sagemaker in algorithm mode INFO:root:Determined delimiter of CSV input is ',' INFO:root:Determined delimiter of CSV input is ',' INFO:root:Determined delimiter of CSV input is ',' [16:40:18] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, INFO:root:Determined delimiter of CSV input is ',' [16:40:20] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, INFO:root:Single node training. INFO:root:Train matrix has 15000 rows INFO:root:Validation matrix has 10000 rows [16:40:20] WARNING: /workspace/src/learner.cc:328:  Parameters: { early_stopping_rounds, num_round, silent } might not be used. This may not be accurate due to some parameters are only used in language bindings but passed down to XGBoost core. Or some parameters are not used but slip through this verification. Please open an issue if you find above cases.  [0]#011train-error:0.29027#011validation-error:0.31110 [1]#011train-error:0.27647#011validation-error:0.29530 [2]#011train-error:0.27300#011validation-error:0.29020 [3]#011train-error:0.26193#011validation-error:0.28040 [4]#011train-error:0.25960#011validation-error:0.27870 [5]#011train-error:0.25233#011validation-error:0.27350 [6]#011train-error:0.24080#011validation-error:0.26370 [7]#011train-error:0.23587#011validation-error:0.25640 [8]#011train-error:0.22560#011validation-error:0.24670 [9]#011train-error:0.22633#011validation-error:0.24960 [10]#011train-error:0.22147#011validation-error:0.24520 [11]#011train-error:0.21720#011validation-error:0.24110 [12]#011train-error:0.21267#011validation-error:0.23760 [13]#011train-error:0.20920#011validation-error:0.23390 [14]#011train-error:0.20500#011validation-error:0.22710 [15]#011train-error:0.19873#011validation-error:0.22090 [16]#011train-error:0.19847#011validation-error:0.21990 [17]#011train-error:0.19413#011validation-error:0.21590 [18]#011train-error:0.19047#011validation-error:0.21370 [19]#011train-error:0.18640#011validation-error:0.21130 [20]#011train-error:0.18367#011validation-error:0.20750 [21]#011train-error:0.18060#011validation-error:0.20730 [22]#011train-error:0.17860#011validation-error:0.20350 [23]#011train-error:0.17640#011validation-error:0.20110 [24]#011train-error:0.17407#011validation-error:0.20070 [25]#011train-error:0.17173#011validation-error:0.19900 [26]#011train-error:0.16793#011validation-error:0.19780 [27]#011train-error:0.16647#011validation-error:0.19520 [28]#011train-error:0.16447#011validation-error:0.19400 [29]#011train-error:0.16227#011validation-error:0.19350 [30]#011train-error:0.16080#011validation-error:0.19200 [31]#011train-error:0.15927#011validation-error:0.18990 [32]#011train-error:0.15747#011validation-error:0.18870 [33]#011train-error:0.15727#011validation-error:0.18840 [34]#011train-error:0.15613#011validation-error:0.18790 [35]#011train-error:0.15440#011validation-error:0.18810 [36]#011train-error:0.15353#011validation-error:0.18790 [37]#011train-error:0.15067#011validation-error:0.18490 [38]#011train-error:0.14813#011validation-error:0.18430 [39]#011train-error:0.14747#011validation-error:0.18280 [40]#011train-error:0.14760#011validation-error:0.18300 [41]#011train-error:0.14593#011validation-error:0.18230 [42]#011train-error:0.14460#011validation-error:0.18030 
[43]#011train-error:0.14347#011validation-error:0.18010 [44]#011train-error:0.14140#011validation-error:0.17990 [45]#011train-error:0.14113#011validation-error:0.17920 [46]#011train-error:0.14093#011validation-error:0.17940 [47]#011train-error:0.13973#011validation-error:0.17880 [48]#011train-error:0.13807#011validation-error:0.17740 [49]#011train-error:0.13700#011validation-error:0.17610 [50]#011train-error:0.13660#011validation-error:0.17520 [51]#011train-error:0.13613#011validation-error:0.17420 [52]#011train-error:0.13427#011validation-error:0.17400 [53]#011train-error:0.13387#011validation-error:0.17370 [54]#011train-error:0.13393#011validation-error:0.17240 [55]#011train-error:0.13347#011validation-error:0.17030 [56]#011train-error:0.13267#011validation-error:0.16930 [57]#011train-error:0.13260#011validation-error:0.16960 [58]#011train-error:0.13220#011validation-error:0.16830 [59]#011train-error:0.13060#011validation-error:0.16780 [60]#011train-error:0.12887#011validation-error:0.16780 [61]#011train-error:0.12807#011validation-error:0.16630 [62]#011train-error:0.12727#011validation-error:0.16600 [63]#011train-error:0.12653#011validation-error:0.16430 [64]#011train-error:0.12593#011validation-error:0.16360 [65]#011train-error:0.12600#011validation-error:0.16340 [66]#011train-error:0.12587#011validation-error:0.16350 [67]#011train-error:0.12553#011validation-error:0.16290 [68]#011train-error:0.12527#011validation-error:0.16200 [69]#011train-error:0.12500#011validation-error:0.16230 [70]#011train-error:0.12380#011validation-error:0.16330 [71]#011train-error:0.12333#011validation-error:0.16330 [72]#011train-error:0.12253#011validation-error:0.16270 [73]#011train-error:0.12187#011validation-error:0.16170 [74]#011train-error:0.12113#011validation-error:0.16150 [75]#011train-error:0.12073#011validation-error:0.16060 [76]#011train-error:0.11927#011validation-error:0.16030 [77]#011train-error:0.11860#011validation-error:0.16110 [78]#011train-error:0.11840#011validation-error:0.16080 [79]#011train-error:0.11793#011validation-error:0.16000 [80]#011train-error:0.11700#011validation-error:0.16060 [81]#011train-error:0.11727#011validation-error:0.15950 [82]#011train-error:0.11727#011validation-error:0.15960 [83]#011train-error:0.11520#011validation-error:0.15870 [84]#011train-error:0.11487#011validation-error:0.15820 [85]#011train-error:0.11460#011validation-error:0.15830 [86]#011train-error:0.11373#011validation-error:0.15830 [87]#011train-error:0.11220#011validation-error:0.15710 [88]#011train-error:0.11187#011validation-error:0.15610 [89]#011train-error:0.11133#011validation-error:0.15640 [90]#011train-error:0.11093#011validation-error:0.15600 [91]#011train-error:0.11040#011validation-error:0.15620 [92]#011train-error:0.10953#011validation-error:0.15570 [93]#011train-error:0.10973#011validation-error:0.15520 [94]#011train-error:0.10913#011validation-error:0.15520 [95]#011train-error:0.10900#011validation-error:0.15540 [96]#011train-error:0.10820#011validation-error:0.15550 [97]#011train-error:0.10787#011validation-error:0.15550 [98]#011train-error:0.10740#011validation-error:0.15520 [99]#011train-error:0.10740#011validation-error:0.15450 [100]#011train-error:0.10733#011validation-error:0.15520 [101]#011train-error:0.10740#011validation-error:0.15460 [102]#011train-error:0.10680#011validation-error:0.15450 [103]#011train-error:0.10647#011validation-error:0.15320 [104]#011train-error:0.10647#011validation-error:0.15260 [105]#011train-error:0.10567#011validation-error:0.15220 
[106]#011train-error:0.10520#011validation-error:0.15160
[107]#011train-error:0.10493#011validation-error:0.15140
[108]#011train-error:0.10513#011validation-error:0.15130
[109]#011train-error:0.10507#011validation-error:0.15170
[110]#011train-error:0.10353#011validation-error:0.15190
[111]#011train-error:0.10353#011validation-error:0.15170
[112]#011train-error:0.10320#011validation-error:0.15120
[113]#011train-error:0.10273#011validation-error:0.15150
[114]#011train-error:0.10220#011validation-error:0.15150
[115]#011train-error:0.10200#011validation-error:0.15090
[116]#011train-error:0.10160#011validation-error:0.15140
[117]#011train-error:0.10133#011validation-error:0.15090
[118]#011train-error:0.10060#011validation-error:0.14960
[119]#011train-error:0.10000#011validation-error:0.14950
[120]#011train-error:0.10033#011validation-error:0.14960
[121]#011train-error:0.09967#011validation-error:0.14900
[122]#011train-error:0.09953#011validation-error:0.14920
[123]#011train-error:0.09847#011validation-error:0.14850
[124]#011train-error:0.09793#011validation-error:0.14830
[125]#011train-error:0.09727#011validation-error:0.14750
[126]#011train-error:0.09760#011validation-error:0.14730
[127]#011train-error:0.09727#011validation-error:0.14680
[128]#011train-error:0.09707#011validation-error:0.14650
[129]#011train-error:0.09627#011validation-error:0.14600
[130]#011train-error:0.09633#011validation-error:0.14580
[131]#011train-error:0.09573#011validation-error:0.14590
[132]#011train-error:0.09453#011validation-error:0.14620
[133]#011train-error:0.09473#011validation-error:0.14510
[134]#011train-error:0.09413#011validation-error:0.14580
[135]#011train-error:0.09413#011validation-error:0.14610
[136]#011train-error:0.09353#011validation-error:0.14630
[137]#011train-error:0.09280#011validation-error:0.14610
[138]#011train-error:0.09287#011validation-error:0.14590
[139]#011train-error:0.09247#011validation-error:0.14550
[140]#011train-error:0.09220#011validation-error:0.14590
[141]#011train-error:0.09213#011validation-error:0.14570
[142]#011train-error:0.09140#011validation-error:0.14590
[143]#011train-error:0.09207#011validation-error:0.14560

2021-04-20 16:43:23 Uploading - Uploading generated training model
2021-04-20 16:43:23 Completed - Training job completed
Training seconds: 251
Billable seconds: 251
Text message sent: SMde3c0617c35942fa94375cbc1cc923a9
###Markdown
Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count=1, instance_type='ml.m4.xlarge')
###Output
Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`.
Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() twilio_helper.send_text_message(f'{notebook_name}: model testing finished') ###Output ..........................................[2021-04-20:16:50:32:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:16:50:32:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:16:50:32:INFO] nginx config:  worker_processes auto; daemon off; pid /tmp/nginx.pid; error_log /dev/stderr;  worker_rlimit_nofile 4096;  events { worker_connections 2048; [2021-04-20:16:50:32:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:16:50:32:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:16:50:32:INFO] nginx config:  worker_processes auto; daemon off; pid /tmp/nginx.pid; error_log /dev/stderr;  worker_rlimit_nofile 4096;  events { worker_connections 2048; }  http { include /etc/nginx/mime.types; default_type application/octet-stream; access_log /dev/stdout combined; upstream gunicorn { server unix:/tmp/gunicorn.sock; } server { listen 8080 deferred; client_max_body_size 0; keepalive_timeout 3; location ~ ^/(ping|invocations|execution-parameters) { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_read_timeout 60s; proxy_pass http://gunicorn; } location / { return 404 "{}"; } } }  [2021-04-20 16:50:32 +0000] [17] [INFO] Starting gunicorn 19.10.0 [2021-04-20 16:50:32 +0000] [17] [INFO] Listening at: unix:/tmp/gunicorn.sock (17) [2021-04-20 16:50:32 +0000] [17] [INFO] Using worker: gevent [2021-04-20 16:50:32 +0000] [24] [INFO] Booting worker with pid: 24 }  http { include /etc/nginx/mime.types; default_type application/octet-stream; access_log /dev/stdout combined; upstream gunicorn { server unix:/tmp/gunicorn.sock; } server { listen 8080 deferred; client_max_body_size 0; keepalive_timeout 3; location ~ ^/(ping|invocations|execution-parameters) { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_read_timeout 60s; proxy_pass http://gunicorn; } location / { return 404 "{}"; } } }  [2021-04-20 16:50:32 +0000] [17] [INFO] Starting gunicorn 19.10.0 [2021-04-20 16:50:32 +0000] [17] [INFO] Listening at: unix:/tmp/gunicorn.sock (17) [2021-04-20 16:50:32 +0000] [17] [INFO] Using worker: gevent [2021-04-20 16:50:32 +0000] [24] [INFO] Booting worker with pid: 24 [2021-04-20 16:50:32 +0000] [25] [INFO] Booting worker with pid: 25 [2021-04-20 16:50:32 +0000] [26] [INFO] Booting worker with pid: 26 [2021-04-20 16:50:32 +0000] [27] [INFO] Booting worker with pid: 27 [2021-04-20 16:50:32 +0000] [25] [INFO] Booting worker with pid: 25 [2021-04-20 16:50:32 +0000] [26] [INFO] Booting worker with pid: 26 [2021-04-20 16:50:32 +0000] [27] [INFO] Booting worker with pid: 27 [2021-04-20:16:50:38:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [20/Apr/2021:16:50:38 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" 
[2021-04-20:16:50:38:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [20/Apr/2021:16:50:38 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" [2021-04-20:16:50:38:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [20/Apr/2021:16:50:38 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" [2021-04-20:16:50:38:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [20/Apr/2021:16:50:38 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" 2021-04-20T16:50:38.559:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2021-04-20:16:50:42:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:16:50:42:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:16:50:42:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:42:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:42:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:16:50:42:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:42:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:42:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:42:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:42:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:16:50:42:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:42:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:50:45 +0000] "POST /invocations HTTP/1.1" 200 12189 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:50:45 +0000] "POST /invocations HTTP/1.1" 200 12223 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:50:45 +0000] "POST /invocations HTTP/1.1" 200 12204 "-" "Go-http-client/1.1" [2021-04-20:16:50:45:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:50:45 +0000] "POST /invocations HTTP/1.1" 200 12184 "-" "Go-http-client/1.1" [2021-04-20:16:50:45:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:45:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:45:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:50:45 +0000] "POST /invocations HTTP/1.1" 200 12189 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:50:45 +0000] "POST /invocations HTTP/1.1" 200 12223 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:50:45 +0000] "POST /invocations HTTP/1.1" 200 12204 "-" "Go-http-client/1.1" [2021-04-20:16:50:45:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:50:45 +0000] "POST /invocations HTTP/1.1" 200 12184 "-" "Go-http-client/1.1" [2021-04-20:16:50:45:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:45:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:45:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:50:51 +0000] "POST /invocations HTTP/1.1" 200 12244 "-" "Go-http-client/1.1" [2021-04-20:16:50:51:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:50:51 +0000] "POST /invocations HTTP/1.1" 200 12199 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:50:51 +0000] "POST /invocations HTTP/1.1" 200 12194 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:50:51 +0000] "POST /invocations HTTP/1.1" 200 12207 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:50:51 +0000] "POST /invocations 
HTTP/1.1" 200 12244 "-" "Go-http-client/1.1" [2021-04-20:16:50:51:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:50:51 +0000] "POST /invocations HTTP/1.1" 200 12199 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:50:51 +0000] "POST /invocations HTTP/1.1" 200 12194 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:50:51 +0000] "POST /invocations HTTP/1.1" 200 12207 "-" "Go-http-client/1.1" [2021-04-20:16:50:52:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:52:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:52:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:52:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:52:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:52:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:50:54 +0000] "POST /invocations HTTP/1.1" 200 12172 "-" "Go-http-client/1.1" [2021-04-20:16:50:54:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:50:55 +0000] "POST /invocations HTTP/1.1" 200 12194 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:50:54 +0000] "POST /invocations HTTP/1.1" 200 12172 "-" "Go-http-client/1.1" [2021-04-20:16:50:54:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:50:55 +0000] "POST /invocations HTTP/1.1" 200 12194 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:50:55 +0000] "POST /invocations HTTP/1.1" 200 12210 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:50:55 +0000] "POST /invocations HTTP/1.1" 200 12192 "-" "Go-http-client/1.1" [2021-04-20:16:50:55:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:55:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:55:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:50:55 +0000] "POST /invocations HTTP/1.1" 200 12210 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:50:55 +0000] "POST /invocations HTTP/1.1" 200 12192 "-" "Go-http-client/1.1" [2021-04-20:16:50:55:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:55:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:55:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:50:57 +0000] "POST /invocations HTTP/1.1" 200 12187 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:50:57 +0000] "POST /invocations HTTP/1.1" 200 12187 "-" "Go-http-client/1.1" [2021-04-20:16:50:58:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:50:58 +0000] "POST /invocations HTTP/1.1" 200 12236 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:50:58 +0000] "POST /invocations HTTP/1.1" 200 12181 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:50:58 +0000] "POST /invocations HTTP/1.1" 200 12234 "-" "Go-http-client/1.1" [2021-04-20:16:50:58:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:58:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:58:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:58:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:50:58 +0000] "POST /invocations HTTP/1.1" 200 12236 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:50:58 +0000] "POST /invocations HTTP/1.1" 200 12181 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:50:58 +0000] "POST /invocations HTTP/1.1" 200 12234 "-" "Go-http-client/1.1" 
[2021-04-20:16:50:58:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:58:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:50:58:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:51:01:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:51:01:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:51:04 +0000] "POST /invocations HTTP/1.1" 200 12230 "-" "Go-http-client/1.1" [2021-04-20:16:51:04:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:51:04 +0000] "POST /invocations HTTP/1.1" 200 12205 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:51:04 +0000] "POST /invocations HTTP/1.1" 200 12163 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:51:04 +0000] "POST /invocations HTTP/1.1" 200 12204 "-" "Go-http-client/1.1" [2021-04-20:16:51:04:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:51:04:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:51:04:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:51:04 +0000] "POST /invocations HTTP/1.1" 200 12230 "-" "Go-http-client/1.1" [2021-04-20:16:51:04:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:51:04 +0000] "POST /invocations HTTP/1.1" 200 12205 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:51:04 +0000] "POST /invocations HTTP/1.1" 200 12163 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:51:04 +0000] "POST /invocations HTTP/1.1" 200 12204 "-" "Go-http-client/1.1" [2021-04-20:16:51:04:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:51:04:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:51:04:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:51:10 +0000] "POST /invocations HTTP/1.1" 200 12196 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:51:10 +0000] "POST /invocations HTTP/1.1" 200 12196 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:51:10 +0000] "POST /invocations HTTP/1.1" 200 12182 "-" "Go-http-client/1.1" [2021-04-20:16:51:10:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:51:10:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:51:10 +0000] "POST /invocations HTTP/1.1" 200 12182 "-" "Go-http-client/1.1" [2021-04-20:16:51:10:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:51:10:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:51:11 +0000] "POST /invocations HTTP/1.1" 200 12183 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:51:11 +0000] "POST /invocations HTTP/1.1" 200 12186 "-" "Go-http-client/1.1" [2021-04-20:16:51:11:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:51:11:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:51:11 +0000] "POST /invocations HTTP/1.1" 200 12183 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:51:11 +0000] "POST /invocations HTTP/1.1" 200 12186 "-" "Go-http-client/1.1" [2021-04-20:16:51:11:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:51:11:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:51:13 +0000] "POST /invocations HTTP/1.1" 200 9120 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:51:13 +0000] "POST /invocations HTTP/1.1" 200 12186 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:51:13 +0000] "POST /invocations HTTP/1.1" 200 12204 "-" 
"Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:51:14 +0000] "POST /invocations HTTP/1.1" 200 12185 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:51:13 +0000] "POST /invocations HTTP/1.1" 200 9120 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:51:13 +0000] "POST /invocations HTTP/1.1" 200 12186 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:51:13 +0000] "POST /invocations HTTP/1.1" 200 12204 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:51:14 +0000] "POST /invocations HTTP/1.1" 200 12185 "-" "Go-http-client/1.1" Text message sent: SMf48a077db5f7416390600352d11c7417 ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-us-east-1-616964915547/sagemaker-xgboost-2021-04-20-16-43-47-627/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code import pandas as pd predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ### 0.85336 ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer( # max_features=5000, vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed # TODO: Transform our new data set and store the transformed data in the variable new_XV # new_XV = vectorizer.fit_transform(new_X).toarray() new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = None new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. 
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() twilio_helper.send_text_message(f'{notebook_name}: testing with new data finished') ###Output .................................[2021-04-20:16:57:22:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:16:57:22:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:16:57:22:INFO] nginx config:  worker_processes auto; daemon off; pid /tmp/nginx.pid; error_log /dev/stderr;  worker_rlimit_nofile 4096;  events { worker_connections 2048; }  http { include /etc/nginx/mime.types; default_type application/octet-stream; access_log /dev/stdout combined; upstream gunicorn { server unix:/tmp/gunicorn.sock; } server { listen 8080 deferred; client_max_body_size 0; keepalive_timeout 3; location ~ ^/(ping|invocations|execution-parameters) { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_read_timeout 60s; proxy_pass http://gunicorn; } location / { return 404 "{}"; } } }  [2021-04-20 16:57:22 +0000] [17] [INFO] Starting gunicorn 19.10.0 [2021-04-20 16:57:22 +0000] [17] [INFO] Listening at: unix:/tmp/gunicorn.sock (17) [2021-04-20 16:57:22 +0000] [17] [INFO] Using worker: gevent [2021-04-20 16:57:22 +0000] [24] [INFO] Booting worker with pid: 24 [2021-04-20 16:57:22 +0000] [25] [INFO] Booting worker with pid: 25 [2021-04-20 16:57:22 +0000] [26] [INFO] Booting worker with pid: 26 [2021-04-20 16:57:22 +0000] [27] [INFO] Booting worker with pid: 27 [2021-04-20:16:57:28:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:16:57:28:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [20/Apr/2021:16:57:28 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:28 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:28 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:28 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" [2021-04-20:16:57:32:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:16:57:32:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:32:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:16:57:32:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:32:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:16:57:32:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:32:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:32:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:16:57:32:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:32:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:16:57:32:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:32:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:32:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:16:57:32:INFO] Determined delimiter of CSV input is ',' 2021-04-20T16:57:28.722:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD 169.254.255.130 - - [20/Apr/2021:16:57:35 +0000] "POST /invocations HTTP/1.1" 200 12196 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:35 +0000] "POST /invocations HTTP/1.1" 200 12186 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:35 +0000] "POST /invocations HTTP/1.1" 200 
12223 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:35 +0000] "POST /invocations HTTP/1.1" 200 12181 "-" "Go-http-client/1.1" [2021-04-20:16:57:35:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:35:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:35:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:36:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:57:35 +0000] "POST /invocations HTTP/1.1" 200 12196 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:35 +0000] "POST /invocations HTTP/1.1" 200 12186 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:35 +0000] "POST /invocations HTTP/1.1" 200 12223 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:35 +0000] "POST /invocations HTTP/1.1" 200 12181 "-" "Go-http-client/1.1" [2021-04-20:16:57:35:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:35:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:35:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:36:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:57:38 +0000] "POST /invocations HTTP/1.1" 200 12204 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:38 +0000] "POST /invocations HTTP/1.1" 200 12165 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:38 +0000] "POST /invocations HTTP/1.1" 200 12204 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:38 +0000] "POST /invocations HTTP/1.1" 200 12165 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:38 +0000] "POST /invocations HTTP/1.1" 200 12199 "-" "Go-http-client/1.1" [2021-04-20:16:57:39:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:57:39 +0000] "POST /invocations HTTP/1.1" 200 12193 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:38 +0000] "POST /invocations HTTP/1.1" 200 12199 "-" "Go-http-client/1.1" [2021-04-20:16:57:39:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:57:39 +0000] "POST /invocations HTTP/1.1" 200 12193 "-" "Go-http-client/1.1" [2021-04-20:16:57:39:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:39:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:39:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:39:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:41:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:41:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:57:41 +0000] "POST /invocations HTTP/1.1" 200 12190 "-" "Go-http-client/1.1" [2021-04-20:16:57:41:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:41:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:57:41 +0000] "POST /invocations HTTP/1.1" 200 12190 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:42 +0000] "POST /invocations HTTP/1.1" 200 12237 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:42 +0000] "POST /invocations HTTP/1.1" 200 12237 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:43 +0000] "POST /invocations HTTP/1.1" 200 12209 "-" "Go-http-client/1.1" [2021-04-20:16:57:43:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:43:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:43:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:43:INFO] Determined delimiter of CSV input 
is ',' 169.254.255.130 - - [20/Apr/2021:16:57:43 +0000] "POST /invocations HTTP/1.1" 200 12209 "-" "Go-http-client/1.1" [2021-04-20:16:57:43:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:43:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:43:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:43:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:46:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:46:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:46:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:46:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:46:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:46:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:57:49 +0000] "POST /invocations HTTP/1.1" 200 12246 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:49 +0000] "POST /invocations HTTP/1.1" 200 12173 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:49 +0000] "POST /invocations HTTP/1.1" 200 12169 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:49 +0000] "POST /invocations HTTP/1.1" 200 12176 "-" "Go-http-client/1.1" [2021-04-20:16:57:49:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:49:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:49:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:49:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:57:49 +0000] "POST /invocations HTTP/1.1" 200 12246 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:49 +0000] "POST /invocations HTTP/1.1" 200 12173 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:49 +0000] "POST /invocations HTTP/1.1" 200 12169 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:49 +0000] "POST /invocations HTTP/1.1" 200 12176 "-" "Go-http-client/1.1" [2021-04-20:16:57:49:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:49:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:49:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:49:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:57:52 +0000] "POST /invocations HTTP/1.1" 200 12231 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:52 +0000] "POST /invocations HTTP/1.1" 200 12231 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:52 +0000] "POST /invocations HTTP/1.1" 200 12212 "-" "Go-http-client/1.1" [2021-04-20:16:57:52:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:57:52 +0000] "POST /invocations HTTP/1.1" 200 12218 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:52 +0000] "POST /invocations HTTP/1.1" 200 12214 "-" "Go-http-client/1.1" [2021-04-20:16:57:52:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:52:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:57:52 +0000] "POST /invocations HTTP/1.1" 200 12212 "-" "Go-http-client/1.1" [2021-04-20:16:57:52:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:57:52 +0000] "POST /invocations HTTP/1.1" 200 12218 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:52 +0000] "POST /invocations HTTP/1.1" 200 12214 "-" "Go-http-client/1.1" [2021-04-20:16:57:52:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:52:INFO] Determined delimiter of CSV input 
is ',' [2021-04-20:16:57:53:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:53:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:57:55 +0000] "POST /invocations HTTP/1.1" 200 12207 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:55 +0000] "POST /invocations HTTP/1.1" 200 12199 "-" "Go-http-client/1.1" [2021-04-20:16:57:55:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:57:55 +0000] "POST /invocations HTTP/1.1" 200 12175 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:56 +0000] "POST /invocations HTTP/1.1" 200 12206 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:55 +0000] "POST /invocations HTTP/1.1" 200 12207 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:55 +0000] "POST /invocations HTTP/1.1" 200 12199 "-" "Go-http-client/1.1" [2021-04-20:16:57:55:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:57:55 +0000] "POST /invocations HTTP/1.1" 200 12175 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:56 +0000] "POST /invocations HTTP/1.1" 200 12206 "-" "Go-http-client/1.1" [2021-04-20:16:57:56:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:56:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:56:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:56:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:56:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:56:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:57:59 +0000] "POST /invocations HTTP/1.1" 200 12217 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:59 +0000] "POST /invocations HTTP/1.1" 200 12176 "-" "Go-http-client/1.1" [2021-04-20:16:57:59:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:59:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:59:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:57:59 +0000] "POST /invocations HTTP/1.1" 200 12217 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:57:59 +0000] "POST /invocations HTTP/1.1" 200 12176 "-" "Go-http-client/1.1" [2021-04-20:16:57:59:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:59:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:57:59:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:58:02 +0000] "POST /invocations HTTP/1.1" 200 12205 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:58:02 +0000] "POST /invocations HTTP/1.1" 200 12189 "-" "Go-http-client/1.1" [2021-04-20:16:58:02:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:58:02 +0000] "POST /invocations HTTP/1.1" 200 12223 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:58:02 +0000] "POST /invocations HTTP/1.1" 200 12215 "-" "Go-http-client/1.1" [2021-04-20:16:58:02:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:58:02:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:58:02 +0000] "POST /invocations HTTP/1.1" 200 12205 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:58:02 +0000] "POST /invocations HTTP/1.1" 200 12189 "-" "Go-http-client/1.1" [2021-04-20:16:58:02:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:58:02 +0000] "POST /invocations HTTP/1.1" 200 12223 "-" "Go-http-client/1.1" 169.254.255.130 - - 
[20/Apr/2021:16:58:02 +0000] "POST /invocations HTTP/1.1" 200 12215 "-" "Go-http-client/1.1" [2021-04-20:16:58:02:INFO] Determined delimiter of CSV input is ',' [2021-04-20:16:58:02:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:16:58:03 +0000] "POST /invocations HTTP/1.1" 200 9120 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:58:03 +0000] "POST /invocations HTTP/1.1" 200 9120 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:58:04 +0000] "POST /invocations HTTP/1.1" 200 12194 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:58:04 +0000] "POST /invocations HTTP/1.1" 200 12232 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:58:04 +0000] "POST /invocations HTTP/1.1" 200 12194 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:16:58:04 +0000] "POST /invocations HTTP/1.1" 200 12232 "-" "Go-http-client/1.1" Text message sent: SMe05fed0bd580468593895cbd1b1d1d02 ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-us-east-1-616964915547/sagemaker-xgboost-2021-04-20-16-51-58-565/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ### 0.73088 ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = None xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') twilio_helper.send_text_message(f'{notebook_name}: original model deployed') ### Using already existing model: sagemaker-xgboost-2021-04-19-15-01-40-820 ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. Using already existing model: sagemaker-xgboost-2021-04-20-16-36-32-047 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. 
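Before we start reading individual reviews, it is also worth seeing *how* the accuracy is dropping. A confusion matrix of the batch transform predictions against `new_Y` tells us whether the mistakes are concentrated in one sentiment class or spread fairly evenly across both. The cell below is a minimal sketch using the `predictions` and `new_Y` objects computed above; `confusion_matrix` is scikit-learn's standard helper and is not otherwise used in this notebook.
###Code
from sklearn.metrics import confusion_matrix

# Rows correspond to the true labels (0 = negative, 1 = positive) and columns to the
# predicted labels, so the off-diagonal entries count the misclassified reviews per class.
cm = confusion_matrix(new_Y, predictions)
# Inspect cm (e.g. print(cm)) to see whether one class suffers more under the new data.
###Output _____no_output_____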
###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['compel', 'piec', 'low', 'budget', 'horror', 'rel', 'origin', 'premis', 'cast', 'fill', 'familiar', 'face', 'one', 'convinc', 'film', 'locat', 'histori', 'horror', 'film', 'could', 'anyon', 'pleas', 'tell', 'movi', 'utterli', 'underr', 'prison', 'finnish', 'director', 'harlin', 'american', 'debut', 'still', 'count', 'best', 'effort', 'even', 'though', 'went', 'make', 'blockbust', 'hit', 'like', 'die', 'hard', '2', 'cliffhang', 'deep', 'blue', 'sea', 'stori', 'entir', 'take', 'place', 'ancient', 'ramshackl', 'wyom', 'prison', 'open', 'caus', 'popul', 'modern', 'state', 'penitentiari', 'insid', 'former', 'execut', 'dungeon', 'restless', 'spirit', 'electr', 'chair', 'last', 'victim', 'still', 'dwell', 'around', 'promot', 'warden', 'eaton', 'sharp', 'lane', 'smith', 'alreadi', '40', 'year', 'ago', 'innoc', 'man', 'put', 'death', 'spirit', 'still', 'rememb', 'vile', 'role', 'unfair', 'trial', 'seem', 'time', 'vengeanc', 'final', 'arriv', 'viggo', 'mortensen', 'play', 'good', 'car', 'thief', 'prevent', 'even', 'larger', 'bodi', 'count', 'chelsea', 'field', 'human', 'social', 'worker', 'slowli', 'unravel', 'secret', 'past', 'prison', 'contain', 'half', 'dozen', 'memor', 'gore', 'sequenc', 'unbear', 'tens', 'atmospher', 'stick', 'certain', 'unlik', 'horror', 'pictur', 'decad', 'prison', 'featur', 'amaz', 'sens', 'realism', 'refer', 'authent', 'sceneri', 'mood', 'insid', 'prison', 'wall', 'cours', 'toward', 'supernatur', 'murder', 'commit', 'even', 'though', 'genuin', 'unsettl', 'well', 'film', 'best', 'part', 'imag', 'realist', 'tough', 'prison', 'drama', 'sequenc', 'combin', 'visual', 'mayhem', 'shock', 'horror', 'absolut', 'best', 'terror', 'moment', 'provid', 'nightmar', 'ever', 'sinc', 'saw', 'rather', 'young', 'age', 'focus', 'grizzli', 'death', 'struggl', 'involv', 'barb', 'wire', 'haunt', 'screenplay', 'suffer', 'one', 'flaw', 'common', 'one', 'almost', 'inevit', 'guess', 'clich', 'stori', 'introduc', 'nearli', 'everi', 'possibl', 'stereotyp', 'prison', 'surround', 'got', 'ugli', 'fat', 'pervert', 'cute', 'boy', 'toy', 'cowardli', 'racist', 'guard', 'avoid', 'confront', 'cost', 'natur', 'old', 'n', 'wise', 'black', 'con', 'serv', 'lifetim', 'hear', 'anybodi', 'yell', 'name', 'morgan', 'freeman', 'stare', 'blind', 'clich', 'advis', 'mani', 'element', 'admir', 
'photographi', 'dark', 'moist', 'mysteri', 'upheld', 'long', 'success', 'support', 'inmat', 'role', 'class', 'b', 'actor', 'excel', 'fan', 'recogn', 'tom', 'everett', 'tom', 'tini', 'lister', 'even', 'immort', 'horror', 'icon', 'kane', 'hodder', 'forget', 'we', 'craven', 'god', 'aw', 'attempt', 'shocker', 'downright', 'pathet', 'chees', 'flick', 'chair', 'prison', 'chiller', 'worth', 'track', 'especi', 'consid', 'viggo', 'mortensen', 'peak', 'popular', 'nowaday', 'heard', 'star', 'success', 'franchis', 'involv', 'elv', 'hobbit', 'fairi', 'creatur', 'true', '80', 'horror', 'gem', 'ought', 'get', 'urgent', 'dvd', 'releas'], 1) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer( max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:489: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None' warnings.warn("The parameter 'token_pattern' will not be used" ###Markdown Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'21st', 'ghetto', 'reincarn', 'spill', 'weari', 'playboy', 'victorian'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'optimist', 'omin', 'dubiou', 'orchestr', 'sophi', 'banana', 'masterson'} ###Markdown These words themselves don't tell us much, however if one of these words occured with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question:** What exactly is going on here. Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. ###Code for word in (original_vocabulary - new_vocabulary): print(word, vocabulary[word]) for word in (new_vocabulary - original_vocabulary): print(word, new_vectorizer.vocabulary_[word]) ###Output optimist 3169 omin 3156 dubiou 1426 orchestr 3172 sophi 4144 banana 424 masterson 2803 ###Markdown (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. 
This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send an bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code prefix # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. 
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator( container, # The name of the training container role, # The IAM role to use (our current role in this case) train_instance_count=1, # The number of instances to use for training train_instance_type='ml.m4.xlarge', # The type of instance ot use for training output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), # Where to save the output (the model artifacts) sagemaker_session=session) # The current SageMaker session # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters( max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = None s3_new_input_validation = None s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') # use "new_train_location" s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') %%time # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) twilio_helper.send_text_message(f'{notebook_name}: new model training finished') ###Output 2021-04-20 17:08:16 Starting - Starting the training job... 2021-04-20 17:08:17 Starting - Launching requested ML instances...... 2021-04-20 17:09:33 Starting - Preparing the instances for training......... 2021-04-20 17:10:54 Downloading - Downloading input data... 2021-04-20 17:11:18 Training - Downloading the training image... 2021-04-20 17:11:59 Training - Training image download completed. Training in progress.INFO:sagemaker-containers:Imported framework sagemaker_xgboost_container.training INFO:sagemaker-containers:Failed to parse hyperparameter objective value binary:logistic to Json. 
Returning the value itself INFO:sagemaker-containers:No GPUs detected (normal if no gpus installed) INFO:sagemaker_xgboost_container.training:Running XGBoost Sagemaker in algorithm mode INFO:root:Determined delimiter of CSV input is ',' INFO:root:Determined delimiter of CSV input is ',' INFO:root:Determined delimiter of CSV input is ',' [17:12:05] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, INFO:root:Determined delimiter of CSV input is ',' [17:12:07] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, INFO:root:Single node training. INFO:root:Train matrix has 15000 rows INFO:root:Validation matrix has 10000 rows [17:12:07] WARNING: /workspace/src/learner.cc:328:  Parameters: { early_stopping_rounds, num_round, silent } might not be used. This may not be accurate due to some parameters are only used in language bindings but passed down to XGBoost core. Or some parameters are not used but slip through this verification. Please open an issue if you find above cases.  [0]#011train-error:0.29560#011validation-error:0.31620 [1]#011train-error:0.29287#011validation-error:0.30900 [2]#011train-error:0.29260#011validation-error:0.30960 [3]#011train-error:0.27367#011validation-error:0.29260 [4]#011train-error:0.26827#011validation-error:0.28680 [5]#011train-error:0.25553#011validation-error:0.27300 [6]#011train-error:0.25407#011validation-error:0.26840 [7]#011train-error:0.23700#011validation-error:0.25370 [8]#011train-error:0.23113#011validation-error:0.24830 [9]#011train-error:0.22747#011validation-error:0.24640 [10]#011train-error:0.22220#011validation-error:0.24300 [11]#011train-error:0.21813#011validation-error:0.24170 [12]#011train-error:0.21647#011validation-error:0.23810 [13]#011train-error:0.21473#011validation-error:0.23680 [14]#011train-error:0.21067#011validation-error:0.23390 [15]#011train-error:0.20553#011validation-error:0.23010 [16]#011train-error:0.20193#011validation-error:0.22500 [17]#011train-error:0.19767#011validation-error:0.22300 [18]#011train-error:0.19513#011validation-error:0.22090 [19]#011train-error:0.19200#011validation-error:0.21960 [20]#011train-error:0.18893#011validation-error:0.21860 [21]#011train-error:0.18580#011validation-error:0.21510 [22]#011train-error:0.18380#011validation-error:0.21390 [23]#011train-error:0.18180#011validation-error:0.21200 [24]#011train-error:0.18013#011validation-error:0.21100 [25]#011train-error:0.17980#011validation-error:0.21000 [26]#011train-error:0.17873#011validation-error:0.21020 [27]#011train-error:0.17673#011validation-error:0.20890 [28]#011train-error:0.17387#011validation-error:0.20790 [29]#011train-error:0.17260#011validation-error:0.20630 [30]#011train-error:0.17020#011validation-error:0.20690 [31]#011train-error:0.16820#011validation-error:0.20330 [32]#011train-error:0.16700#011validation-error:0.20280 [33]#011train-error:0.16740#011validation-error:0.20210 [34]#011train-error:0.16607#011validation-error:0.20050 [35]#011train-error:0.16413#011validation-error:0.19870 [36]#011train-error:0.16233#011validation-error:0.19790 [37]#011train-error:0.15947#011validation-error:0.19590 [38]#011train-error:0.15860#011validation-error:0.19560 [39]#011train-error:0.15827#011validation-error:0.19640 [40]#011train-error:0.15660#011validation-error:0.19480 [41]#011train-error:0.15467#011validation-error:0.19450 [42]#011train-error:0.15453#011validation-error:0.19550 
[43]#011train-error:0.15313#011validation-error:0.19360 [44]#011train-error:0.15233#011validation-error:0.19380 [45]#011train-error:0.15173#011validation-error:0.19380 [46]#011train-error:0.15027#011validation-error:0.19300 [47]#011train-error:0.14973#011validation-error:0.19220 [48]#011train-error:0.14813#011validation-error:0.19230 [49]#011train-error:0.14760#011validation-error:0.19200 [50]#011train-error:0.14653#011validation-error:0.18990 [51]#011train-error:0.14660#011validation-error:0.19070 [52]#011train-error:0.14620#011validation-error:0.19020 [53]#011train-error:0.14580#011validation-error:0.19100 [54]#011train-error:0.14500#011validation-error:0.19020 [55]#011train-error:0.14540#011validation-error:0.18890 [56]#011train-error:0.14467#011validation-error:0.18900 [57]#011train-error:0.14320#011validation-error:0.18820 [58]#011train-error:0.14200#011validation-error:0.18770 [59]#011train-error:0.14120#011validation-error:0.18850 [60]#011train-error:0.14073#011validation-error:0.18820 [61]#011train-error:0.14100#011validation-error:0.18880 [62]#011train-error:0.14067#011validation-error:0.18720 [63]#011train-error:0.13933#011validation-error:0.18750 [64]#011train-error:0.13927#011validation-error:0.18630 [65]#011train-error:0.13600#011validation-error:0.18560 [66]#011train-error:0.13567#011validation-error:0.18520 [67]#011train-error:0.13540#011validation-error:0.18460 [68]#011train-error:0.13520#011validation-error:0.18480 [69]#011train-error:0.13413#011validation-error:0.18450 [70]#011train-error:0.13400#011validation-error:0.18440 [71]#011train-error:0.13427#011validation-error:0.18480 [72]#011train-error:0.13400#011validation-error:0.18500 [73]#011train-error:0.13333#011validation-error:0.18490 [74]#011train-error:0.13307#011validation-error:0.18550 [75]#011train-error:0.13273#011validation-error:0.18520 [76]#011train-error:0.13193#011validation-error:0.18410 [77]#011train-error:0.13147#011validation-error:0.18470 [78]#011train-error:0.13087#011validation-error:0.18430 [79]#011train-error:0.12987#011validation-error:0.18340 [80]#011train-error:0.13000#011validation-error:0.18320 [81]#011train-error:0.12993#011validation-error:0.18290 [82]#011train-error:0.12940#011validation-error:0.18250 [83]#011train-error:0.12873#011validation-error:0.18200 [84]#011train-error:0.12847#011validation-error:0.18260 [85]#011train-error:0.12773#011validation-error:0.18320 [86]#011train-error:0.12753#011validation-error:0.18210 [87]#011train-error:0.12753#011validation-error:0.18120 [88]#011train-error:0.12680#011validation-error:0.18070 [89]#011train-error:0.12600#011validation-error:0.18090 [90]#011train-error:0.12440#011validation-error:0.18140 [91]#011train-error:0.12413#011validation-error:0.18150 [92]#011train-error:0.12360#011validation-error:0.18090 [93]#011train-error:0.12273#011validation-error:0.18040 [94]#011train-error:0.12287#011validation-error:0.18120 [95]#011train-error:0.12273#011validation-error:0.18150 [96]#011train-error:0.12313#011validation-error:0.18090 [97]#011train-error:0.12287#011validation-error:0.18080 [98]#011train-error:0.12220#011validation-error:0.18080 [99]#011train-error:0.12207#011validation-error:0.18050 [100]#011train-error:0.12167#011validation-error:0.18030 [101]#011train-error:0.12160#011validation-error:0.17930 [102]#011train-error:0.12100#011validation-error:0.17930 [103]#011train-error:0.12067#011validation-error:0.17970 [104]#011train-error:0.12053#011validation-error:0.17970 [105]#011train-error:0.12033#011validation-error:0.17970 
[106]#011train-error:0.12047#011validation-error:0.17960 [107]#011train-error:0.11993#011validation-error:0.17940 [108]#011train-error:0.11933#011validation-error:0.17920 [109]#011train-error:0.11900#011validation-error:0.17930 [110]#011train-error:0.11887#011validation-error:0.17910 [111]#011train-error:0.11747#011validation-error:0.17880 [112]#011train-error:0.11720#011validation-error:0.17870 [113]#011train-error:0.11693#011validation-error:0.17880 [114]#011train-error:0.11653#011validation-error:0.17860 [115]#011train-error:0.11627#011validation-error:0.17750 [116]#011train-error:0.11573#011validation-error:0.17790 [117]#011train-error:0.11587#011validation-error:0.17770 [118]#011train-error:0.11487#011validation-error:0.17810 [119]#011train-error:0.11460#011validation-error:0.17850 [120]#011train-error:0.11473#011validation-error:0.17790 [121]#011train-error:0.11453#011validation-error:0.17780 [122]#011train-error:0.11400#011validation-error:0.17760 [123]#011train-error:0.11413#011validation-error:0.17720 [124]#011train-error:0.11393#011validation-error:0.17770 [125]#011train-error:0.11353#011validation-error:0.17790 2021-04-20 17:14:47 Uploading - Uploading generated training model[126]#011train-error:0.11393#011validation-error:0.17720 [127]#011train-error:0.11393#011validation-error:0.17790 [128]#011train-error:0.11373#011validation-error:0.17820 [129]#011train-error:0.11300#011validation-error:0.17780 [130]#011train-error:0.11340#011validation-error:0.17780 [131]#011train-error:0.11267#011validation-error:0.17750 [132]#011train-error:0.11260#011validation-error:0.17780 [133]#011train-error:0.11140#011validation-error:0.17770 2021-04-20 17:14:54 Completed - Training job completed Training seconds: 240 Billable seconds: 240 Text message sent: SM87fc1821340a4be9900b8376a58b81e5 CPU times: user 1.04 s, sys: 63.2 ms, total: 1.1 s Wall time: 7min 15s ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of **a textbook example of leakage**. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count=1, instance_type='ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code %%time # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. 
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() twilio_helper.send_text_message(f'{notebook_name}: new model testing finished') ###Output .................................[2021-04-20:17:20:53:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:17:20:53:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:17:20:53:INFO] nginx config:  worker_processes auto; daemon off; pid /tmp/nginx.pid; error_log /dev/stderr;  worker_rlimit_nofile 4096;  events { worker_connections 2048; }  http { include /etc/nginx/mime.types; default_type application/octet-stream; access_log /dev/stdout combined; upstream gunicorn { server unix:/tmp/gunicorn.sock; } server { listen 8080 deferred; client_max_body_size 0; keepalive_timeout 3; location ~ ^/(ping|invocations|execution-parameters) { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_read_timeout 60s; proxy_pass http://gunicorn; } location / { return 404 "{}"; } } }  [2021-04-20 17:20:53 +0000] [17] [INFO] Starting gunicorn 19.10.0 [2021-04-20 17:20:53 +0000] [17] [INFO] Listening at: unix:/tmp/gunicorn.sock (17) [2021-04-20 17:20:53 +0000] [17] [INFO] Using worker: gevent [2021-04-20 17:20:53 +0000] [24] [INFO] Booting worker with pid: 24 [2021-04-20 17:20:53 +0000] [25] [INFO] Booting worker with pid: 25 [2021-04-20 17:20:53 +0000] [26] [INFO] Booting worker with pid: 26 [2021-04-20 17:20:53 +0000] [27] [INFO] Booting worker with pid: 27 [2021-04-20:17:20:59:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [20/Apr/2021:17:20:59 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" [2021-04-20:17:20:59:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [20/Apr/2021:17:20:59 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" 2021-04-20T17:20:59.687:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2021-04-20:17:21:02:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:17:21:02:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:02:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:17:21:02:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:02:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:02:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:02:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:17:21:02:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:02:INFO] No GPUs detected (normal if no gpus installed) [2021-04-20:17:21:02:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:02:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:02:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:05 +0000] "POST /invocations HTTP/1.1" 200 12124 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:05 +0000] "POST /invocations HTTP/1.1" 200 12142 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:05 +0000] "POST /invocations HTTP/1.1" 200 12131 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:06 +0000] "POST /invocations HTTP/1.1" 200 12116 "-" "Go-http-client/1.1" [2021-04-20:17:21:06:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:06:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:05 +0000] "POST /invocations HTTP/1.1" 200 
12124 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:05 +0000] "POST /invocations HTTP/1.1" 200 12142 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:05 +0000] "POST /invocations HTTP/1.1" 200 12131 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:06 +0000] "POST /invocations HTTP/1.1" 200 12116 "-" "Go-http-client/1.1" [2021-04-20:17:21:06:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:06:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:06:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:06:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:06:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:06:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:09 +0000] "POST /invocations HTTP/1.1" 200 12136 "-" "Go-http-client/1.1" [2021-04-20:17:21:09:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:09 +0000] "POST /invocations HTTP/1.1" 200 12118 "-" "Go-http-client/1.1" [2021-04-20:17:21:09:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:09:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:09:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:09 +0000] "POST /invocations HTTP/1.1" 200 12136 "-" "Go-http-client/1.1" [2021-04-20:17:21:09:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:09 +0000] "POST /invocations HTTP/1.1" 200 12118 "-" "Go-http-client/1.1" [2021-04-20:17:21:09:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:09:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:09:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:12 +0000] "POST /invocations HTTP/1.1" 200 12135 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:12 +0000] "POST /invocations HTTP/1.1" 200 12130 "-" "Go-http-client/1.1" [2021-04-20:17:21:12:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:12 +0000] "POST /invocations HTTP/1.1" 200 12114 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:12 +0000] "POST /invocations HTTP/1.1" 200 12135 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:12 +0000] "POST /invocations HTTP/1.1" 200 12130 "-" "Go-http-client/1.1" [2021-04-20:17:21:12:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:12 +0000] "POST /invocations HTTP/1.1" 200 12114 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:12 +0000] "POST /invocations HTTP/1.1" 200 12143 "-" "Go-http-client/1.1" [2021-04-20:17:21:12:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:12:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:12:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:12 +0000] "POST /invocations HTTP/1.1" 200 12143 "-" "Go-http-client/1.1" [2021-04-20:17:21:12:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:12:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:12:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:15 +0000] "POST /invocations HTTP/1.1" 200 12122 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:15 +0000] "POST /invocations HTTP/1.1" 200 12126 "-" "Go-http-client/1.1" [2021-04-20:17:21:15:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - 
[20/Apr/2021:17:21:15 +0000] "POST /invocations HTTP/1.1" 200 12118 "-" "Go-http-client/1.1" [2021-04-20:17:21:15:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:15 +0000] "POST /invocations HTTP/1.1" 200 12141 "-" "Go-http-client/1.1" [2021-04-20:17:21:15:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:15:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:15 +0000] "POST /invocations HTTP/1.1" 200 12122 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:15 +0000] "POST /invocations HTTP/1.1" 200 12126 "-" "Go-http-client/1.1" [2021-04-20:17:21:15:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:15 +0000] "POST /invocations HTTP/1.1" 200 12118 "-" "Go-http-client/1.1" [2021-04-20:17:21:15:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:15 +0000] "POST /invocations HTTP/1.1" 200 12141 "-" "Go-http-client/1.1" [2021-04-20:17:21:15:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:15:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:18 +0000] "POST /invocations HTTP/1.1" 200 12144 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:18 +0000] "POST /invocations HTTP/1.1" 200 12143 "-" "Go-http-client/1.1" [2021-04-20:17:21:18:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:18 +0000] "POST /invocations HTTP/1.1" 200 12136 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:18 +0000] "POST /invocations HTTP/1.1" 200 12144 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:18 +0000] "POST /invocations HTTP/1.1" 200 12143 "-" "Go-http-client/1.1" [2021-04-20:17:21:18:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:18 +0000] "POST /invocations HTTP/1.1" 200 12136 "-" "Go-http-client/1.1" [2021-04-20:17:21:18:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:18 +0000] "POST /invocations HTTP/1.1" 200 12133 "-" "Go-http-client/1.1" [2021-04-20:17:21:18:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:19:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:18:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:18 +0000] "POST /invocations HTTP/1.1" 200 12133 "-" "Go-http-client/1.1" [2021-04-20:17:21:18:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:19:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:22:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:22 +0000] "POST /invocations HTTP/1.1" 200 12122 "-" "Go-http-client/1.1" [2021-04-20:17:21:22:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:22:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:22 +0000] "POST /invocations HTTP/1.1" 200 12122 "-" "Go-http-client/1.1" [2021-04-20:17:21:22:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:22:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:22:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:24 +0000] "POST /invocations HTTP/1.1" 200 12162 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:24 +0000] "POST /invocations HTTP/1.1" 200 12124 "-" "Go-http-client/1.1" [2021-04-20:17:21:25:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:24 
+0000] "POST /invocations HTTP/1.1" 200 12162 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:24 +0000] "POST /invocations HTTP/1.1" 200 12124 "-" "Go-http-client/1.1" [2021-04-20:17:21:25:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:25 +0000] "POST /invocations HTTP/1.1" 200 12134 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:25 +0000] "POST /invocations HTTP/1.1" 200 12134 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:25 +0000] "POST /invocations HTTP/1.1" 200 12131 "-" "Go-http-client/1.1" [2021-04-20:17:21:25:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:25:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:25:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:25 +0000] "POST /invocations HTTP/1.1" 200 12131 "-" "Go-http-client/1.1" [2021-04-20:17:21:25:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:25:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:25:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:28 +0000] "POST /invocations HTTP/1.1" 200 12143 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:28 +0000] "POST /invocations HTTP/1.1" 200 12143 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:28 +0000] "POST /invocations HTTP/1.1" 200 12122 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:28 +0000] "POST /invocations HTTP/1.1" 200 12122 "-" "Go-http-client/1.1" [2021-04-20:17:21:28:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:28 +0000] "POST /invocations HTTP/1.1" 200 12124 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:28 +0000] "POST /invocations HTTP/1.1" 200 12131 "-" "Go-http-client/1.1" [2021-04-20:17:21:28:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:28:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:28:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:28:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:28 +0000] "POST /invocations HTTP/1.1" 200 12124 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:28 +0000] "POST /invocations HTTP/1.1" 200 12131 "-" "Go-http-client/1.1" [2021-04-20:17:21:28:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:28:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:28:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:31 +0000] "POST /invocations HTTP/1.1" 200 12106 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:31 +0000] "POST /invocations HTTP/1.1" 200 12140 "-" "Go-http-client/1.1" [2021-04-20:17:21:31:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:31 +0000] "POST /invocations HTTP/1.1" 200 12125 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:31 +0000] "POST /invocations HTTP/1.1" 200 12141 "-" "Go-http-client/1.1" [2021-04-20:17:21:31:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:31:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:31:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:31 +0000] "POST /invocations HTTP/1.1" 200 12106 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:31 +0000] "POST /invocations HTTP/1.1" 200 12140 "-" "Go-http-client/1.1" [2021-04-20:17:21:31:INFO] Determined 
delimiter of CSV input is ',' 169.254.255.130 - - [20/Apr/2021:17:21:31 +0000] "POST /invocations HTTP/1.1" 200 12125 "-" "Go-http-client/1.1" 169.254.255.130 - - [20/Apr/2021:17:21:31 +0000] "POST /invocations HTTP/1.1" 200 12141 "-" "Go-http-client/1.1" [2021-04-20:17:21:31:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:31:INFO] Determined delimiter of CSV input is ',' [2021-04-20:17:21:31:INFO] Determined delimiter of CSV input is ',' Text message sent: SMdae23fd5a7524fb89e0d6b66cb5039e8 CPU times: user 975 ms, sys: 70.6 ms, total: 1.05 s Wall time: 6min 45s ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-us-east-1-616964915547/sagemaker-xgboost-2021-04-20-17-15-31-610/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ### 0.8538 ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, **if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.**To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code ### this is kind of similar to "regression test" in software development cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() twilio_helper.send_text_message(f'{notebook_name}: check data format now!') print(len(test_X[0])) # should be 5000 assert len(test_X[0])==5000 ###Output Text message sent: SM9723d01e8ec54a56a23b3b4000ba2ec3 5000 ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. 
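Before doing that, a quick aside on the leakage question raised above: the simplest mitigation is to carve a held-out slice off the newly collected reviews *before* any retraining, so that the replacement model is scored on data it has never seen. The cell below is only an illustrative sketch with dummy stand-in arrays (in the real workflow the stand-ins would play the role of `new_XV` and `new_Y`, and the split would happen before calling `new_xgb.fit`); it is not part of this notebook's actual flow.
###Code
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy stand-ins for the bag-of-words features and labels of the new reviews.
features = np.random.randint(0, 3, size=(1000, 50))
labels = np.random.randint(0, 2, size=1000)

# Keep 20% of the new data completely out of training. Only this held-out slice
# gives an unbiased estimate of how a retrained model copes with the shifted data.
X_fit, X_holdout, y_fit, y_holdout = train_test_split(
    features, labels, test_size=0.2, random_state=42)
###Output _____no_output_____
###Markdown With that aside out of the way, back to checking the new model against the original test set.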
###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_X = None test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) print(test_location) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() twilio_helper.send_text_message(f'{notebook_name}: new model testing with the original test data finished') new_xgb_transformer.output_path # e.g. 's3://sagemaker-us-east-1-616964915547/sagemaker-xgboost-2021-04-20-20-05-57-322' !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code ### old model name xgb_transformer.model_name ### new model name new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = \ "sentiment-update-xgboost-endpoint-config-" + \ strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. 
This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint( EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output ! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir ### Do NOT execute this cell if you still need the data ### in the cache for this or some other mini projects! # # Similarly we will remove the files in the cache_dir directory and the directory itself # !rm $cache_dir/* # !rmdir $cache_dir twilio_helper.send_text_message(f'{notebook_name}: running all cells finished') ###Output Text message sent: SM8fb618c11f4c407584082401dc7024c7 ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. 
InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output _____no_output_____ ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
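To make the effect of this normalization concrete, here is a small, purely illustrative sketch (the toy review string below is made up) showing roughly what the cleaning steps described above do to a single review before we apply them to the whole dataset in the next cell. ###Code # Illustrative only: HTML removal, lower-casing, stopword removal and stemming on a made-up review
import re
import nltk
from bs4 import BeautifulSoup
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer

nltk.download("stopwords", quiet=True)

toy_review = "<br />This movie was GREAT -- the acting, the story... everything!"
text = BeautifulSoup(toy_review, "html.parser").get_text()    # strip the <br /> tag
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())             # lower case, drop punctuation
words = [w for w in text.split() if w not in stopwords.words("english")]
words = [PorterStemmer().stem(w) for w in words]              # reduce words to their stems
print(words)  # expected to look something like ['movi', 'great', 'act', 'stori', 'everyth']
###Output _____no_output_____ ###Markdown The cell below applies exactly this kind of processing to every review, caching the result so that it only has to be computed once. 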
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output _____no_output_____ ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
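Before running the full extraction, a tiny self-contained sketch may help make the "fit on the training set only" point concrete; the toy token lists below are made up, and the real run later restricts the vocabulary to 5000 words. ###Code # Illustrative only: fit a CountVectorizer on toy training documents, then transform toy test documents.
# Tokens that never appear in the training documents are simply ignored at transform time.
from sklearn.feature_extraction.text import CountVectorizer

toy_train = [["movi", "great", "great"], ["movi", "bad"]]   # already tokenized, like our reviews
toy_test  = [["great", "unseen"]]                           # "unseen" is not in the training data

toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
toy_train_bow = toy_vectorizer.fit_transform(toy_train).toarray()
toy_test_bow  = toy_vectorizer.transform(toy_test).toarray()

print(toy_vectorizer.vocabulary_)  # e.g. {'bad': 0, 'great': 1, 'movi': 2}
print(toy_train_bow)               # word counts per training document
print(toy_test_bow)                # "unseen" contributes nothing to the test encoding
###Output _____no_output_____ ###Markdown The cell below does the same thing at full scale, keeping only the 5000 most frequent words and caching the resulting features. 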
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models, such as the XGBoost model we will be using, as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker, which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality, in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach, although using the low-level approach is certainly an option.Recall the method `upload_data()`, which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output _____no_output_____ ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job. 
When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output _____no_output_____ ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
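If you are unsure where to start, the following is one possible sketch rather than the only solution to the TODO below: scikit-learn's `CountVectorizer` accepts a fixed `vocabulary` argument, in which case no fitting is needed before calling `transform`. The `sketch_*` names are placeholders so they don't collide with the variables you are asked to create. ###Code # A hedged sketch, assuming `vocabulary` and `new_X` are defined as above.
from sklearn.feature_extraction.text import CountVectorizer

sketch_vectorizer = CountVectorizer(vocabulary=vocabulary,      # reuse the original training vocabulary
                                    preprocessor=lambda x: x,   # reviews are already tokenized
                                    tokenizer=lambda x: x)
sketch_XV = sketch_vectorizer.transform(new_X).toarray()        # bag-of-words encoding of the new reviews
###Output _____no_output_____ ###Markdown Now work through the TODO cell below yourself, storing your vectorizer and the transformed data in the requested variable names. 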
###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = None # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = None ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = None ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. ###Output _____no_output_____ ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. 
This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = None ###Output _____no_output_____ ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output _____no_output_____ ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output _____no_output_____ ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output _____no_output_____ ###Markdown These words themselves don't tell us much, however if one of these words occured with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here. 
Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send an bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. 
###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = None new_val_location = None new_train_location = None ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = None # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = None s3_new_input_validation = None # TODO: Using the new validation and training data, 'fit' your new model. ###Output _____no_output_____ ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = None ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. ###Output _____no_output_____ ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown And see how well the model did. 
###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output _____no_output_____ ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = None ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. 
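As a reference, here is a minimal sketch of the low-level flow that the next cells build up step by step; it mirrors the calls used earlier in this document and assumes `session`, `new_xgb_transformer` and `xgb_predictor` exist, with the `sketch_*` names being placeholders rather than required names. ###Code # A hedged sketch of creating a new endpoint configuration and pointing the existing endpoint at it.
from time import gmtime, strftime

# Endpoint configuration names must be unique, so a timestamp makes a convenient suffix.
sketch_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())

session.sagemaker_client.create_endpoint_config(
    EndpointConfigName = sketch_config_name,
    ProductionVariants = [{
        "InstanceType": "ml.m4.xlarge",
        "InitialVariantWeight": 1,
        "InitialInstanceCount": 1,
        "ModelName": new_xgb_transformer.model_name,  # the model object created for the batch transform job
        "VariantName": "XGB-Model"
    }])

# Updating the endpoint swaps in the new model behind the scenes, so callers see no downtime.
session.sagemaker_client.update_endpoint(
    EndpointName = xgb_predictor.endpoint,
    EndpointConfigName = sketch_config_name)
###Output _____no_output_____ ###Markdown With that picture in mind, we start by retrieving the name of the newly created model. 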
###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = None # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = None ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. 
###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. 
###Code # Make sure that we use SageMaker 1.x !pip install sagemaker==1.72.0 ###Output Collecting sagemaker==1.72.0 Downloading sagemaker-1.72.0.tar.gz (297 kB)  |████████████████████████████████| 297 kB 17.1 MB/s eta 0:00:01 [?25hRequirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.16.37) Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.4) Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.14.0) Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.4.1) Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5) Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.1.0) Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.7) Requirement already satisfied: botocore<1.20.0,>=1.19.37 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.19.37) Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0) Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.3.3) Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.37->boto3>=1.14.12->sagemaker==1.72.0) (1.25.11) Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.37->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1) Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0) Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.4.0) Requirement already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.7) Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0) Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.14.0) Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0) Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0) Requirement already satisfied: botocore<1.20.0,>=1.19.37 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages 
(from boto3>=1.14.12->sagemaker==1.72.0) (1.19.37) Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.4) Collecting smdebug-rulesconfig==0.1.4 Downloading smdebug_rulesconfig-0.1.4-py2.py3-none-any.whl (10 kB) Building wheels for collected packages: sagemaker Building wheel for sagemaker (setup.py) ... [?25ldone [?25h Created wheel for sagemaker: filename=sagemaker-1.72.0-py2.py3-none-any.whl size=386358 sha256=055c3fbb3c7f514eb9cdb0883c2951865176682a2ad2cfbf2e097b623469c179 Stored in directory: /home/ec2-user/.cache/pip/wheels/c3/58/70/85faf4437568bfaa4c419937569ba1fe54d44c5db42406bbd7 Successfully built sagemaker Installing collected packages: smdebug-rulesconfig, sagemaker Attempting uninstall: smdebug-rulesconfig Found existing installation: smdebug-rulesconfig 1.0.0 Uninstalling smdebug-rulesconfig-1.0.0: Successfully uninstalled smdebug-rulesconfig-1.0.0 Attempting uninstall: sagemaker Found existing installation: sagemaker 2.19.0 Uninstalling sagemaker-2.19.0: Successfully uninstalled sagemaker-2.19.0 Successfully installed sagemaker-1.72.0 smdebug-rulesconfig-0.1.4 WARNING: You are using pip version 20.3; however, version 20.3.3 is available. You should consider upgrading via the '/home/ec2-user/anaconda3/envs/pytorch_p36/bin/python -m pip install --upgrade pip' command. ###Markdown Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output _____no_output_____ ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])

val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output _____no_output_____ ###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets contain no headers or index and that, for the training and validation data, the label occurs first for each sample.

For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
    os.makedirs(data_dir)

pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)

# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output _____no_output_____ ###Markdown
Uploading Training / Validation files to S3
Amazon's S3 service allows us to store files that can be accessed by both the built-in training models, such as the XGBoost model we will be using, as well as custom models such as the one we will see a little later.

For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low-level functionality of SageMaker, which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high-level functionality, in which certain choices have been made on the user's behalf. The low-level approach benefits from allowing the user a great deal of flexibility, while the high-level approach makes development much quicker. For our purposes we will opt for the high-level approach, although using the low-level approach is certainly an option.

Recall the method `upload_data()`, which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the `key_prefix` variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.

For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker

session = sagemaker.Session() # Store the current SageMaker session

# S3 prefix (which folder will we use)
prefix = 'sentiment-update'

test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output _____no_output_____ ###Markdown
Creating the XGBoost model
Now that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker.
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.
- Model Artifacts
- Training Code (Container)
- Inference Code (Container)

The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.

The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.

The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role

# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()

# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')

# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
                                    role,      # What is our current IAM Role
                                    train_instance_count=1, # How many compute instances
                                    train_instance_type='ml.m4.xlarge', # What kind of compute instances
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                    sagemaker_session=session)

# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
                        eta=0.2,
                        gamma=4,
                        min_child_weight=6,
                        subsample=0.8,
                        silent=0,
                        objective='binary:logistic',
                        early_stopping_rounds=10,
                        num_round=500)
###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown
Fit the XGBoost model
Now that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')

xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output 2021-01-23 00:05:53 Starting - Starting the training job... 2021-01-23 00:05:56 Starting - Launching requested ML instances...... 2021-01-23 00:07:10 Starting - Preparing the instances for training...... 2021-01-23 00:08:19 Downloading - Downloading input data 2021-01-23 00:08:19 Training - Downloading the training image... 2021-01-23 00:08:45 Training - Training image download completed. Training in progress.Arguments: train [2021-01-23:00:08:46:INFO] Running standalone xgboost training. [2021-01-23:00:08:46:INFO] File size need to be processed in the node: 238.47mb.
Available memory size in the node: 8414.32mb [2021-01-23:00:08:46:INFO] Determined delimiter of CSV input is ',' [00:08:46] S3DistributionType set as FullyReplicated [00:08:48] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2021-01-23:00:08:48:INFO] Determined delimiter of CSV input is ',' [00:08:48] S3DistributionType set as FullyReplicated [00:08:49] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [00:08:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 8 pruned nodes, max_depth=5 [0]#011train-error:0.2948#011validation-error:0.3057 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [00:08:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [1]#011train-error:0.279#011validation-error:0.2856 [00:08:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [2]#011train-error:0.2728#011validation-error:0.2804 [00:08:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 0 pruned nodes, max_depth=5 [3]#011train-error:0.269533#011validation-error:0.2773 [00:08:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [4]#011train-error:0.250467#011validation-error:0.2633 [00:09:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 2 pruned nodes, max_depth=5 [5]#011train-error:0.248867#011validation-error:0.2617 [00:09:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [6]#011train-error:0.246133#011validation-error:0.26 [00:09:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5 [7]#011train-error:0.240267#011validation-error:0.2534 [00:09:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [8]#011train-error:0.2344#011validation-error:0.2492 [00:09:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [9]#011train-error:0.223867#011validation-error:0.2409 [00:09:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [10]#011train-error:0.221067#011validation-error:0.2394 [00:09:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [11]#011train-error:0.2198#011validation-error:0.2399 [00:09:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [12]#011train-error:0.215467#011validation-error:0.236 [00:09:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 0 pruned nodes, max_depth=5 [13]#011train-error:0.209067#011validation-error:0.2296 [00:09:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [14]#011train-error:0.203467#011validation-error:0.2257 [00:09:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 14 pruned nodes, max_depth=5 [15]#011train-error:0.200267#011validation-error:0.2213 [00:09:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5 [16]#011train-error:0.1988#011validation-error:0.2182 [00:09:16] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [17]#011train-error:0.1968#011validation-error:0.2173 [00:09:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [18]#011train-error:0.1918#011validation-error:0.2134 [00:09:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [19]#011train-error:0.19#011validation-error:0.2131 [00:09:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [20]#011train-error:0.185667#011validation-error:0.2093 [00:09:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [21]#011train-error:0.183533#011validation-error:0.2069 [00:09:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [22]#011train-error:0.182467#011validation-error:0.2041 [00:09:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [23]#011train-error:0.179267#011validation-error:0.2012 [00:09:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [24]#011train-error:0.176933#011validation-error:0.197 [00:09:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [25]#011train-error:0.1746#011validation-error:0.1969 [00:09:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [26]#011train-error:0.173#011validation-error:0.1956 [00:09:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 14 pruned nodes, max_depth=5 [27]#011train-error:0.171067#011validation-error:0.1947 [00:09:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [28]#011train-error:0.170133#011validation-error:0.1938 [00:09:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [29]#011train-error:0.1676#011validation-error:0.1923 [00:09:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [30]#011train-error:0.168133#011validation-error:0.1917 [00:09:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [31]#011train-error:0.1644#011validation-error:0.1882 [00:09:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 14 pruned nodes, max_depth=5 [32]#011train-error:0.164467#011validation-error:0.186 [00:09:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [33]#011train-error:0.1624#011validation-error:0.1836 [00:09:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [34]#011train-error:0.160667#011validation-error:0.1828 [00:09:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [35]#011train-error:0.158133#011validation-error:0.1809 [00:09:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [36]#011train-error:0.157533#011validation-error:0.1795 [00:09:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [37]#011train-error:0.156467#011validation-error:0.1777 [00:09:43] src/tree/updater_prune.cc:74: tree 
pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5 [38]#011train-error:0.155533#011validation-error:0.1779 [00:09:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [39]#011train-error:0.154467#011validation-error:0.1771 [00:09:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [40]#011train-error:0.1542#011validation-error:0.1763 [00:09:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [41]#011train-error:0.153#011validation-error:0.174 [00:09:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [42]#011train-error:0.150533#011validation-error:0.1751 [00:09:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [43]#011train-error:0.148867#011validation-error:0.1747 [00:09:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 [44]#011train-error:0.147867#011validation-error:0.173 [00:09:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [45]#011train-error:0.1474#011validation-error:0.1722 [00:09:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [46]#011train-error:0.145467#011validation-error:0.1731 [00:09:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [47]#011train-error:0.144133#011validation-error:0.173 [00:09:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [48]#011train-error:0.1442#011validation-error:0.1731 [00:09:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [49]#011train-error:0.143#011validation-error:0.1715 [00:09:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 16 pruned nodes, max_depth=5 [50]#011train-error:0.141133#011validation-error:0.171 [00:10:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5 [51]#011train-error:0.140467#011validation-error:0.1696 [00:10:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [52]#011train-error:0.139467#011validation-error:0.169 [00:10:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [53]#011train-error:0.1392#011validation-error:0.1677 [00:10:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [54]#011train-error:0.138933#011validation-error:0.1675 [00:10:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [55]#011train-error:0.1382#011validation-error:0.167 [00:10:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [56]#011train-error:0.136667#011validation-error:0.1673 [00:10:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [57]#011train-error:0.135533#011validation-error:0.1667 [00:10:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 16 pruned nodes, max_depth=5 [58]#011train-error:0.135733#011validation-error:0.1655 [00:10:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 
pruned nodes, max_depth=5 [59]#011train-error:0.134867#011validation-error:0.1651 [00:10:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5 [60]#011train-error:0.134533#011validation-error:0.1643 [00:10:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [61]#011train-error:0.134467#011validation-error:0.1644 [00:10:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [62]#011train-error:0.133#011validation-error:0.1627 [00:10:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [63]#011train-error:0.133133#011validation-error:0.1618 [00:10:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [64]#011train-error:0.132067#011validation-error:0.1613 [00:10:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5 [65]#011train-error:0.131067#011validation-error:0.1612 [00:10:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5 [66]#011train-error:0.129533#011validation-error:0.161 [00:10:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 16 pruned nodes, max_depth=5 [67]#011train-error:0.1292#011validation-error:0.1599 [00:10:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [68]#011train-error:0.1282#011validation-error:0.1589 [00:10:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [69]#011train-error:0.127533#011validation-error:0.1577 [00:10:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [70]#011train-error:0.1272#011validation-error:0.1566 [00:10:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [71]#011train-error:0.126933#011validation-error:0.1577 [00:10:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5 [72]#011train-error:0.1268#011validation-error:0.1576 [00:10:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [73]#011train-error:0.1254#011validation-error:0.1567 [00:10:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 18 pruned nodes, max_depth=5 [74]#011train-error:0.1236#011validation-error:0.1564 [00:10:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [75]#011train-error:0.123533#011validation-error:0.1569 [00:10:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [76]#011train-error:0.1224#011validation-error:0.1569 [00:10:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [77]#011train-error:0.121133#011validation-error:0.1564 [00:10:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 [78]#011train-error:0.121133#011validation-error:0.1551 [00:10:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [79]#011train-error:0.120667#011validation-error:0.154 [00:10:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 
[80]#011train-error:0.120467#011validation-error:0.154 [00:10:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [81]#011train-error:0.119267#011validation-error:0.1551 [00:10:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5 [82]#011train-error:0.118933#011validation-error:0.1551 [00:10:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5 [83]#011train-error:0.1182#011validation-error:0.1547 [00:10:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5 [84]#011train-error:0.1182#011validation-error:0.1541 [00:10:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5 [85]#011train-error:0.1178#011validation-error:0.1535 [00:10:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [86]#011train-error:0.1164#011validation-error:0.1537 [00:10:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5 [87]#011train-error:0.1164#011validation-error:0.1536 [00:10:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [88]#011train-error:0.115867#011validation-error:0.1538 [00:10:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [89]#011train-error:0.114533#011validation-error:0.1528 [00:10:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5 [90]#011train-error:0.113333#011validation-error:0.1525 [00:10:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 16 pruned nodes, max_depth=5 [91]#011train-error:0.112#011validation-error:0.1522 [00:10:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5 [92]#011train-error:0.1122#011validation-error:0.1517 [00:10:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [93]#011train-error:0.111867#011validation-error:0.1517 [00:10:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [94]#011train-error:0.111733#011validation-error:0.1515 [00:10:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5 [95]#011train-error:0.110933#011validation-error:0.1511 [00:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5 [96]#011train-error:0.110533#011validation-error:0.1519 [00:11:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5 [97]#011train-error:0.1094#011validation-error:0.151 [00:11:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [98]#011train-error:0.109333#011validation-error:0.1505 [00:11:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5 [99]#011train-error:0.108933#011validation-error:0.1505 [00:11:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5 [100]#011train-error:0.1084#011validation-error:0.1506 [00:11:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 
[101]#011train-error:0.108333#011validation-error:0.1503 [00:11:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [102]#011train-error:0.1084#011validation-error:0.1491 [00:11:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [103]#011train-error:0.108#011validation-error:0.1474 [00:11:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [104]#011train-error:0.1074#011validation-error:0.1469 [00:11:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [105]#011train-error:0.1066#011validation-error:0.1477 [00:11:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5 [106]#011train-error:0.107#011validation-error:0.1476 [00:11:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [107]#011train-error:0.106333#011validation-error:0.1469 [00:11:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [108]#011train-error:0.105267#011validation-error:0.1466 [00:11:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 22 pruned nodes, max_depth=5 [109]#011train-error:0.105133#011validation-error:0.1465 [00:11:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [110]#011train-error:0.104667#011validation-error:0.1464 [00:11:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [111]#011train-error:0.103933#011validation-error:0.1467 [00:11:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5 [112]#011train-error:0.104533#011validation-error:0.1471 [00:11:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 16 pruned nodes, max_depth=5 [113]#011train-error:0.104133#011validation-error:0.1465 [00:11:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [114]#011train-error:0.103667#011validation-error:0.1465 [00:11:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5 [115]#011train-error:0.102933#011validation-error:0.1467 [00:11:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5 [116]#011train-error:0.1022#011validation-error:0.1461 [00:11:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [117]#011train-error:0.1016#011validation-error:0.145 [00:11:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [118]#011train-error:0.100933#011validation-error:0.146 [00:11:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [119]#011train-error:0.101267#011validation-error:0.1457 [00:11:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [120]#011train-error:0.1014#011validation-error:0.1449 [00:11:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5 [121]#011train-error:0.100733#011validation-error:0.1453 [00:11:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5 
[122]#011train-error:0.1012#011validation-error:0.1454 [00:11:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5 [123]#011train-error:0.100533#011validation-error:0.1457 [00:11:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5 [124]#011train-error:0.0998#011validation-error:0.145 [00:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5 [125]#011train-error:0.099733#011validation-error:0.1448 [00:11:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5 [126]#011train-error:0.0998#011validation-error:0.1451 [00:11:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5 [127]#011train-error:0.099333#011validation-error:0.1446 [00:11:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5 [128]#011train-error:0.0996#011validation-error:0.1445 [00:11:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [129]#011train-error:0.099267#011validation-error:0.1443 [00:11:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 16 pruned nodes, max_depth=5 [130]#011train-error:0.099267#011validation-error:0.1439 [00:11:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [131]#011train-error:0.099#011validation-error:0.144 [00:11:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5 [132]#011train-error:0.098933#011validation-error:0.1435 [00:11:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5 [133]#011train-error:0.099333#011validation-error:0.1428 [00:11:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5 [134]#011train-error:0.098467#011validation-error:0.1439 [00:11:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [135]#011train-error:0.098333#011validation-error:0.1441 [00:11:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5 [136]#011train-error:0.098733#011validation-error:0.1442 [00:11:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [137]#011train-error:0.097933#011validation-error:0.1435 [00:11:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5 [138]#011train-error:0.098333#011validation-error:0.1434 [00:11:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [139]#011train-error:0.096533#011validation-error:0.1417 [00:11:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [140]#011train-error:0.096067#011validation-error:0.1413 [00:11:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 18 pruned nodes, max_depth=5 [141]#011train-error:0.095933#011validation-error:0.1414 [00:11:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [142]#011train-error:0.0956#011validation-error:0.1424 [00:12:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5 
[143]#011train-error:0.095133#011validation-error:0.1413 [00:12:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5 [144]#011train-error:0.095#011validation-error:0.1412 [00:12:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=5 [145]#011train-error:0.0944#011validation-error:0.1407 [00:12:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 12 pruned nodes, max_depth=5 [146]#011train-error:0.094267#011validation-error:0.1401 [00:12:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5 [147]#011train-error:0.094133#011validation-error:0.14 [00:12:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [148]#011train-error:0.0936#011validation-error:0.1395 [00:12:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [149]#011train-error:0.0932#011validation-error:0.1392 [00:12:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 18 pruned nodes, max_depth=5 [150]#011train-error:0.092733#011validation-error:0.1396 [00:12:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5 [151]#011train-error:0.091867#011validation-error:0.1401 [00:12:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5 [152]#011train-error:0.091867#011validation-error:0.139 [00:12:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5 [153]#011train-error:0.0916#011validation-error:0.1384 [00:12:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5 [154]#011train-error:0.091#011validation-error:0.1386 [00:12:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [155]#011train-error:0.091133#011validation-error:0.1387 2021-01-23 00:12:30 Uploading - Uploading generated training model[00:12:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [156]#011train-error:0.091#011validation-error:0.1387 [00:12:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5 [157]#011train-error:0.090867#011validation-error:0.139 [00:12:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5 [158]#011train-error:0.0908#011validation-error:0.1394 [00:12:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5 [159]#011train-error:0.0902#011validation-error:0.1393 [00:12:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5 [160]#011train-error:0.0902#011validation-error:0.1395 [00:12:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5 [161]#011train-error:0.089867#011validation-error:0.1393 [00:12:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 2 pruned nodes, max_depth=5 [162]#011train-error:0.0892#011validation-error:0.1388 [00:12:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 0 pruned nodes, max_depth=5 [163]#011train-error:0.088467#011validation-error:0.1392 Stopping. 
Best iteration: [153]#011train-error:0.0916#011validation-error:0.1384 2021-01-23 00:12:37 Completed - Training job completed Training seconds: 273 Billable seconds: 273 ###Markdown
Testing the model
Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real-time. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set.

To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count=1, instance_type='ml.m4.xlarge')
###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data, so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output _____no_output_____ ###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback, we can run the `wait()` method.
###Code xgb_transformer.wait() ###Output ...............................2021-01-23T00:19:10.073:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve Arguments: serve [2021-01-23 00:19:09 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-01-23 00:19:09 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-01-23 00:19:09 +0000] [1] [INFO] Using worker: gevent [2021-01-23 00:19:09 +0000] [36] [INFO] Booting worker with pid: 36 [2021-01-23 00:19:09 +0000] [37] [INFO] Booting worker with pid: 37 [2021-01-23 00:19:10 +0000] [38] [INFO] Booting worker with pid: 38 [2021-01-23 00:19:10 +0000] [39] [INFO] Booting worker with pid: 39 [2021-01-23:00:19:10:INFO] Model loaded successfully for worker : 36 [2021-01-23:00:19:10:INFO] Model loaded successfully for worker : 37 [2021-01-23:00:19:10:INFO] Model loaded successfully for worker : 38 [2021-01-23:00:19:10:INFO] Model loaded successfully for worker : 39 [2021-01-23:00:19:10:INFO] Sniff delimiter as ',' [2021-01-23:00:19:10:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:10:INFO] Sniff delimiter as ',' [2021-01-23:00:19:10:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:10:INFO] Sniff delimiter as ',' [2021-01-23:00:19:10:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:10:INFO] Sniff delimiter as ',' [2021-01-23:00:19:10:INFO] Determined delimiter of CSV input is ',' [2021-01-23 00:19:09 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-01-23 00:19:09 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-01-23 00:19:09 +0000] [1] [INFO] Using worker: gevent [2021-01-23 00:19:09 +0000] [36] [INFO] Booting worker with pid: 36 [2021-01-23 00:19:09 +0000] [37] [INFO] Booting worker with pid: 37 [2021-01-23 00:19:10 +0000] [38] [INFO] Booting worker with pid: 38 [2021-01-23 00:19:10 +0000] [39] [INFO] Booting worker with pid: 39 [2021-01-23:00:19:10:INFO] Model loaded successfully for worker : 36 [2021-01-23:00:19:10:INFO] Model loaded successfully for worker : 37 [2021-01-23:00:19:10:INFO] Model loaded successfully for worker : 38 [2021-01-23:00:19:10:INFO] Model loaded successfully for worker : 39 [2021-01-23:00:19:10:INFO] Sniff delimiter as ',' [2021-01-23:00:19:10:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:10:INFO] Sniff delimiter as ',' [2021-01-23:00:19:10:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:10:INFO] Sniff delimiter as ',' [2021-01-23:00:19:10:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:10:INFO] Sniff delimiter as ',' [2021-01-23:00:19:10:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:12:INFO] Sniff delimiter as ',' [2021-01-23:00:19:12:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:12:INFO] Sniff delimiter as ',' [2021-01-23:00:19:12:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:13:INFO] Sniff delimiter as ',' [2021-01-23:00:19:13:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:13:INFO] Sniff delimiter as ',' [2021-01-23:00:19:13:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:13:INFO] Sniff delimiter as ',' [2021-01-23:00:19:13:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:13:INFO] Sniff delimiter as ',' [2021-01-23:00:19:13:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:13:INFO] Sniff delimiter as ',' [2021-01-23:00:19:13:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:13:INFO] Sniff delimiter as ',' 
[2021-01-23:00:19:13:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:15:INFO] Sniff delimiter as ',' [2021-01-23:00:19:15:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:15:INFO] Sniff delimiter as ',' [2021-01-23:00:19:15:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:15:INFO] Sniff delimiter as ',' [2021-01-23:00:19:15:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:15:INFO] Sniff delimiter as ',' [2021-01-23:00:19:15:INFO] Sniff delimiter as ',' [2021-01-23:00:19:15:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:15:INFO] Sniff delimiter as ',' [2021-01-23:00:19:15:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:15:INFO] Sniff delimiter as ',' [2021-01-23:00:19:15:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:15:INFO] Sniff delimiter as ',' [2021-01-23:00:19:15:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:15:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:17:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:17:INFO] Sniff delimiter as ',' [2021-01-23:00:19:17:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:17:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:17:INFO] Sniff delimiter as ',' [2021-01-23:00:19:17:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:18:INFO] Sniff delimiter as ',' [2021-01-23:00:19:18:INFO] Sniff delimiter as ',' [2021-01-23:00:19:18:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:18:INFO] Sniff delimiter as ',' [2021-01-23:00:19:18:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:18:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:18:INFO] Sniff delimiter as ',' [2021-01-23:00:19:18:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:20:INFO] Sniff delimiter as ',' [2021-01-23:00:19:20:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:20:INFO] Sniff delimiter as ',' [2021-01-23:00:19:20:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:20:INFO] Sniff delimiter as ',' [2021-01-23:00:19:20:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:20:INFO] Sniff delimiter as ',' [2021-01-23:00:19:20:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:20:INFO] Sniff delimiter as ',' [2021-01-23:00:19:20:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:20:INFO] Sniff delimiter as ',' [2021-01-23:00:19:20:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:20:INFO] Sniff delimiter as ',' [2021-01-23:00:19:20:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:20:INFO] Sniff delimiter as ',' [2021-01-23:00:19:20:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:22:INFO] Sniff delimiter as ',' [2021-01-23:00:19:22:INFO] Sniff delimiter as ',' [2021-01-23:00:19:22:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:22:INFO] Sniff delimiter as ',' [2021-01-23:00:19:22:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:22:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:22:INFO] Sniff delimiter as ',' [2021-01-23:00:19:22:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:23:INFO] Sniff delimiter as ',' [2021-01-23:00:19:23:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:23:INFO] Sniff delimiter as ',' [2021-01-23:00:19:23:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:23:INFO] Sniff delimiter as 
',' [2021-01-23:00:19:23:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:23:INFO] Sniff delimiter as ',' [2021-01-23:00:19:23:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:24:INFO] Sniff delimiter as ',' [2021-01-23:00:19:24:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:25:INFO] Sniff delimiter as ',' [2021-01-23:00:19:24:INFO] Sniff delimiter as ',' [2021-01-23:00:19:24:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:25:INFO] Sniff delimiter as ',' [2021-01-23:00:19:25:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:25:INFO] Sniff delimiter as ',' [2021-01-23:00:19:25:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:25:INFO] Sniff delimiter as ',' [2021-01-23:00:19:25:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:25:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:25:INFO] Sniff delimiter as ',' [2021-01-23:00:19:25:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:25:INFO] Sniff delimiter as ',' [2021-01-23:00:19:25:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:27:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:27:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:28:INFO] Sniff delimiter as ',' [2021-01-23:00:19:28:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:28:INFO] Sniff delimiter as ',' [2021-01-23:00:19:28:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:29:INFO] Sniff delimiter as ',' [2021-01-23:00:19:29:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:29:INFO] Sniff delimiter as ',' [2021-01-23:00:19:29:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:30:INFO] Sniff delimiter as ',' [2021-01-23:00:19:30:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:30:INFO] Sniff delimiter as ',' [2021-01-23:00:19:30:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:30:INFO] Sniff delimiter as ',' [2021-01-23:00:19:30:INFO] Sniff delimiter as ',' [2021-01-23:00:19:30:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:30:INFO] Sniff delimiter as ',' [2021-01-23:00:19:30:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:30:INFO] Sniff delimiter as ',' [2021-01-23:00:19:30:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:30:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:32:INFO] Sniff delimiter as ',' [2021-01-23:00:19:32:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:32:INFO] Sniff delimiter as ',' [2021-01-23:00:19:32:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:32:INFO] Sniff delimiter as ',' [2021-01-23:00:19:32:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:32:INFO] Sniff delimiter as ',' [2021-01-23:00:19:32:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:32:INFO] Sniff delimiter as ',' [2021-01-23:00:19:32:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:32:INFO] Sniff delimiter as ',' [2021-01-23:00:19:32:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:33:INFO] Sniff delimiter as ',' [2021-01-23:00:19:33:INFO] Determined delimiter of CSV input is ',' [2021-01-23:00:19:33:INFO] Sniff delimiter as ',' [2021-01-23:00:19:33:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. 
Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output download: s3://sagemaker-us-east-1-651844727492/xgboost-2021-01-23-00-14-01-496/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]

from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output _____no_output_____ ###Markdown
Step 5: Looking at New Data
So now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.

However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data

new_X, new_Y = new_data.get_new_data()
###Output _____no_output_____ ###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`.

(TODO) Testing the current model
Now that we've loaded the new data, let's check to see how our current XGBoost model performs on it.

First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag-of-words encoding, which we will do now.

First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag-of-words encoding.

**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
vectorizer = None
vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x)

# TODO: Transform our new data set and store the transformed data in the variable new_XV
# Note: we only transform here (no re-fitting), so the original training vocabulary is reused
new_XV = None
new_XV = vectorizer.transform(new_X).toarray()
###Output _____no_output_____ ###Markdown
As a quick sanity check, we make sure that the length of each of our bag-of-words encoded reviews is correct. In particular, it must be the same size as the vocabulary, which in our case is `5000`.
###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = None new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ..................................Arguments: serve [2021-01-23 01:08:41 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-01-23 01:08:41 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-01-23 01:08:41 +0000] [1] [INFO] Using worker: gevent [2021-01-23 01:08:41 +0000] [36] [INFO] Booting worker with pid: 36 [2021-01-23 01:08:41 +0000] [37] [INFO] Booting worker with pid: 37 [2021-01-23:01:08:41:INFO] Model loaded successfully for worker : 36 [2021-01-23:01:08:41:INFO] Model loaded successfully for worker : 37 [2021-01-23 01:08:41 +0000] [38] [INFO] Booting worker with pid: 38 [2021-01-23 01:08:41 +0000] [39] [INFO] Booting worker with pid: 39 [2021-01-23:01:08:41:INFO] Model loaded successfully for worker : 38 [2021-01-23:01:08:41:INFO] Model loaded successfully for worker : 39 [2021-01-23:01:08:41:INFO] Sniff delimiter as ',' [2021-01-23:01:08:41:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:41:INFO] Sniff delimiter as ',' [2021-01-23:01:08:41:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:42:INFO] Sniff delimiter as ',' [2021-01-23:01:08:42:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:42:INFO] Sniff delimiter as ',' [2021-01-23:01:08:42:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:44:INFO] Sniff delimiter as ',' [2021-01-23:01:08:44:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:44:INFO] Sniff delimiter as ',' [2021-01-23:01:08:44:INFO] Sniff delimiter as ',' [2021-01-23:01:08:44:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:44:INFO] Sniff delimiter as ',' [2021-01-23:01:08:44:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:44:INFO] Sniff delimiter as ',' [2021-01-23:01:08:44:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:44:INFO] Sniff delimiter as ',' [2021-01-23:01:08:44:INFO] Determined 
delimiter of CSV input is ',' [2021-01-23:01:08:44:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:44:INFO] Sniff delimiter as ',' [2021-01-23:01:08:44:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:44:INFO] Sniff delimiter as ',' [2021-01-23:01:08:44:INFO] Determined delimiter of CSV input is ',' 2021-01-23T01:08:41.466:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2021-01-23:01:08:46:INFO] Sniff delimiter as ',' [2021-01-23:01:08:46:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:47:INFO] Sniff delimiter as ',' [2021-01-23:01:08:47:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:47:INFO] Sniff delimiter as ',' [2021-01-23:01:08:47:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:46:INFO] Sniff delimiter as ',' [2021-01-23:01:08:46:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:47:INFO] Sniff delimiter as ',' [2021-01-23:01:08:47:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:47:INFO] Sniff delimiter as ',' [2021-01-23:01:08:47:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:47:INFO] Sniff delimiter as ',' [2021-01-23:01:08:47:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:47:INFO] Sniff delimiter as ',' [2021-01-23:01:08:47:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:49:INFO] Sniff delimiter as ',' [2021-01-23:01:08:49:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:49:INFO] Sniff delimiter as ',' [2021-01-23:01:08:49:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:49:INFO] Sniff delimiter as ',' [2021-01-23:01:08:49:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:49:INFO] Sniff delimiter as ',' [2021-01-23:01:08:49:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:49:INFO] Sniff delimiter as ',' [2021-01-23:01:08:49:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:49:INFO] Sniff delimiter as ',' [2021-01-23:01:08:49:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:49:INFO] Sniff delimiter as ',' [2021-01-23:01:08:49:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:49:INFO] Sniff delimiter as ',' [2021-01-23:01:08:49:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:51:INFO] Sniff delimiter as ',' [2021-01-23:01:08:51:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:51:INFO] Sniff delimiter as ',' [2021-01-23:01:08:51:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:51:INFO] Sniff delimiter as ',' [2021-01-23:01:08:51:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:51:INFO] Sniff delimiter as ',' [2021-01-23:01:08:51:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:51:INFO] Sniff delimiter as ',' [2021-01-23:01:08:51:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:51:INFO] Sniff delimiter as ',' [2021-01-23:01:08:51:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:52:INFO] Sniff delimiter as ',' [2021-01-23:01:08:52:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:52:INFO] Sniff delimiter as ',' [2021-01-23:01:08:52:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:54:INFO] Sniff delimiter as ',' [2021-01-23:01:08:54:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:54:INFO] Sniff delimiter as ',' [2021-01-23:01:08:54:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:54:INFO] Sniff 
delimiter as ',' [2021-01-23:01:08:54:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:54:INFO] Sniff delimiter as ',' [2021-01-23:01:08:54:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:54:INFO] Sniff delimiter as ',' [2021-01-23:01:08:54:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:54:INFO] Sniff delimiter as ',' [2021-01-23:01:08:54:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:54:INFO] Sniff delimiter as ',' [2021-01-23:01:08:54:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:54:INFO] Sniff delimiter as ',' [2021-01-23:01:08:54:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:56:INFO] Sniff delimiter as ',' [2021-01-23:01:08:56:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:56:INFO] Sniff delimiter as ',' [2021-01-23:01:08:56:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:56:INFO] Sniff delimiter as ',' [2021-01-23:01:08:56:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:57:INFO] Sniff delimiter as ',' [2021-01-23:01:08:57:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:56:INFO] Sniff delimiter as ',' [2021-01-23:01:08:56:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:56:INFO] Sniff delimiter as ',' [2021-01-23:01:08:56:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:56:INFO] Sniff delimiter as ',' [2021-01-23:01:08:56:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:57:INFO] Sniff delimiter as ',' [2021-01-23:01:08:57:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:58:INFO] Sniff delimiter as ',' [2021-01-23:01:08:58:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:59:INFO] Sniff delimiter as ',' [2021-01-23:01:08:59:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:58:INFO] Sniff delimiter as ',' [2021-01-23:01:08:58:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:08:59:INFO] Sniff delimiter as ',' [2021-01-23:01:08:59:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:09:01:INFO] Sniff delimiter as ',' [2021-01-23:01:09:01:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:09:01:INFO] Sniff delimiter as ',' [2021-01-23:01:09:01:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:09:01:INFO] Sniff delimiter as ',' [2021-01-23:01:09:01:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:09:02:INFO] Sniff delimiter as ',' [2021-01-23:01:09:02:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:09:01:INFO] Sniff delimiter as ',' [2021-01-23:01:09:01:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:09:01:INFO] Sniff delimiter as ',' [2021-01-23:01:09:01:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:09:01:INFO] Sniff delimiter as ',' [2021-01-23:01:09:01:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:09:02:INFO] Sniff delimiter as ',' [2021-01-23:01:09:02:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:09:03:INFO] Sniff delimiter as ',' [2021-01-23:01:09:03:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:09:04:INFO] Sniff delimiter as ',' [2021-01-23:01:09:04:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:09:04:INFO] Sniff delimiter as ',' [2021-01-23:01:09:03:INFO] Sniff delimiter as ',' [2021-01-23:01:09:03:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:09:04:INFO] Sniff delimiter as ',' [2021-01-23:01:09:04:INFO] Determined delimiter of CSV input is 
',' [2021-01-23:01:09:04:INFO] Sniff delimiter as ',' [2021-01-23:01:09:04:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:09:04:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:09:04:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:09:04:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-us-east-1-651844727492/xgboost-2021-01-23-01-02-58-939/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = None xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. Using already existing model: xgboost-2021-01-23-00-05-53-775 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. 
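If generators are new to you, the pattern is simply a function that uses `yield` instead of `return`, handing back one value at a time and only doing work when the next value is requested. Here is a minimal sketch (the function and data below are made up purely for illustration):
###Code
def first_mismatches(predictions, labels):
    # Yield (index, prediction, label) triples one at a time, only when requested.
    for idx, (pred, label) in enumerate(zip(predictions, labels)):
        if pred != label:
            yield idx, pred, label

# Calling the function does no work yet; it just returns a generator object.
example = first_mismatches([1, 0, 1, 1], [1, 1, 1, 0])
print(next(example))  # -> (1, 0, 1), the first disagreement
###Output
_____no_output_____
###Markdown
The `get_sample` generator below follows the same pattern, except that each prediction comes from the deployed endpoint and only the misclassified reviews (together with their true labels) are yielded.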
###Code
def get_sample(in_X, in_XV, in_Y):
    for idx, smp in enumerate(in_X):
        res = round(float(xgb_predictor.predict(in_XV[idx])))
        if res != in_Y[idx]:
            yield smp, in_Y[idx]

gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator.
###Code
print(next(gn))
###Output
(['review', 'mst3k', 'left', 'best', 'part', 'memor', 'scene', 'otherwis', 'dread', 'movi', 'good', 'rape', 'shower', 'scene', 'commit', 'bad', 'guy', 'ben', 'gazzara', 'look', 'alik', 'maria', 'mention', 'kill', 'later', 'j', 'ineptitud', 'perhap', 'rape', 'strong', 'word', 'prison', 'mate', 'ritual', 'may', 'appropri', 'background', 'behind', 'chanc', 'yet', 'forc', 'meet', 'mobster', 'hide', 'ben', 'gazzara', 'introduc', 'girl', 'hang', 'pool', '30', 'ish', 'blond', 'diss', 'villain', 'must', 'quit', 'smitten', 'courtship', 'point', 'first', 'move', 'attempt', 'drown', 'mafia', 'benefactor', 'tell', 'knock', 'kind', 'like', 'girl', 'high', 'school', 'like', 'still', 'want', 'carnal', 'knowledg', 'anyway', 'let', 'say', 'catch', 'cabana', 'later'], 0)
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.
To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000,
                                 preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:507: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None'
  warnings.warn("The parameter 'token_pattern' will not be used"
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
{'ghetto', 'spill', 'weari', 'playboy', '21st', 'reincarn', 'victorian'}
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
{'dubiou', 'optimist', 'masterson', 'omin', 'banana', 'orchestr', 'sophi'}
###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.
**Question** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?
**NOTE:** This is meant to be a very open-ended question.
To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data.

(TODO) Build a new model
Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.
To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.
**NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag-of-words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.
In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd

# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10,000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])

new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)

pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
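Before doing so, it is worth noting why the files were written the way they were: the SageMaker XGBoost container expects CSV input with no header row, no index column, and the label in the first column, which is why the `pd.concat` calls above place `new_val_y`/`new_train_y` before the features. An optional sanity check might look like the sketch below (not required for the rest of the notebook):
###Code
# Peek at the first row of the validation file: column 0 should be the 0/1 label,
# followed by the 5000 bag-of-words counts.
check = pd.read_csv(os.path.join(data_dir, 'new_validation.csv'), header=None, nrows=1)
print(check.shape)       # expected: (1, 5001)
print(check.iloc[0, 0])  # expected: a 0 or 1 label
###Output
_____no_output_____
###Markdown
With the files verified, the next cell drops the in-memory copies.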
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.
**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
new_data_location = None
new_val_location = None
new_train_location = None

new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.
**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
new_xgb = None

xgb_container = get_image_uri(session.boto_region_name, 'xgboost')

new_xgb = sagemaker.estimator.Estimator(xgb_container,          # The name of the training container
                                        role,                   # The IAM role to use (our current role in this case)
                                        train_instance_count=1, # The number of instances to use for training
                                        train_instance_type='ml.m4.xlarge', # The type of instance to use for training
                                        output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                                                # Where to save the output (the model artifacts)
                                        sagemaker_session=session) # The current SageMaker session

# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
#       used when training the original model.
new_xgb.set_hyperparameters(max_depth=5,
                            eta=0.2,
                            gamma=4,
                            min_child_weight=6,
                            subsample=0.8,
                            objective='reg:linear',
                            early_stopping_rounds=10,
                            num_round=200)
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2. There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example: get_image_uri(region, 'xgboost', '1.0-1'). Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
Once the model has been created, we can train it with our new data.
**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
#       find the training and validation data.
s3_new_input_train = None
s3_new_input_validation = None

s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='text/csv')
# The validation channel should point at the validation CSV (new_val_location), not the training data.
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='text/csv')

# TODO: Using the new validation and training data, 'fit' your new model.
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
2021-01-23 01:36:01 Starting - Starting the training job... 2021-01-23 01:36:03 Starting - Launching requested ML instances......... 2021-01-23 01:37:35 Starting - Preparing the instances for training...... 2021-01-23 01:38:37 Downloading - Downloading input data... 2021-01-23 01:39:10 Training - Downloading the training image... 2021-01-23 01:39:46 Training - Training image download completed. Training in progress.Arguments: train [2021-01-23:01:39:47:INFO] Running standalone xgboost training.
[2021-01-23:01:39:47:INFO] File size need to be processed in the node: 286.16mb. Available memory size in the node: 8435.83mb [2021-01-23:01:39:47:INFO] Determined delimiter of CSV input is ',' [01:39:47] S3DistributionType set as FullyReplicated [01:39:49] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2021-01-23:01:39:49:INFO] Determined delimiter of CSV input is ',' [01:39:49] S3DistributionType set as FullyReplicated [01:39:51] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [01:39:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 26 pruned nodes, max_depth=5 [0]#011train-rmse:0.481747#011validation-rmse:0.481747 Multiple eval metrics have been passed: 'validation-rmse' will be used for early stopping.  Will train until validation-rmse hasn't improved in 10 rounds. [01:39:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 24 pruned nodes, max_depth=5 [1]#011train-rmse:0.469014#011validation-rmse:0.469014 [01:39:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 24 pruned nodes, max_depth=5 [2]#011train-rmse:0.459078#011validation-rmse:0.459078 [01:39:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 30 pruned nodes, max_depth=5 [3]#011train-rmse:0.4509#011validation-rmse:0.4509 [01:40:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 34 pruned nodes, max_depth=5 [4]#011train-rmse:0.445367#011validation-rmse:0.445367 [01:40:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 30 pruned nodes, max_depth=5 [5]#011train-rmse:0.439628#011validation-rmse:0.439628 [01:40:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 26 pruned nodes, max_depth=5 [6]#011train-rmse:0.434286#011validation-rmse:0.434286 [01:40:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 28 pruned nodes, max_depth=5 [7]#011train-rmse:0.428996#011validation-rmse:0.428996 [01:40:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 30 pruned nodes, max_depth=5 [8]#011train-rmse:0.425676#011validation-rmse:0.425676 [01:40:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 30 pruned nodes, max_depth=5 [9]#011train-rmse:0.422002#011validation-rmse:0.422002 [01:40:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 28 pruned nodes, max_depth=5 [10]#011train-rmse:0.41898#011validation-rmse:0.41898 [01:40:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 34 pruned nodes, max_depth=5 [11]#011train-rmse:0.415922#011validation-rmse:0.415922 [01:40:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 32 pruned nodes, max_depth=5 [12]#011train-rmse:0.413381#011validation-rmse:0.413381 [01:40:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 30 pruned nodes, max_depth=5 [13]#011train-rmse:0.410745#011validation-rmse:0.410745 [01:40:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 34 pruned nodes, max_depth=5 [14]#011train-rmse:0.408136#011validation-rmse:0.408136 [01:40:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 30 pruned nodes, max_depth=5 [15]#011train-rmse:0.406052#011validation-rmse:0.406052 [01:40:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 22 
pruned nodes, max_depth=5 [16]#011train-rmse:0.404257#011validation-rmse:0.404257 [01:40:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 38 pruned nodes, max_depth=5 [17]#011train-rmse:0.402129#011validation-rmse:0.402129 [01:40:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 30 pruned nodes, max_depth=4 [18]#011train-rmse:0.401288#011validation-rmse:0.401288 [01:40:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 36 pruned nodes, max_depth=5 [19]#011train-rmse:0.399614#011validation-rmse:0.399614 [01:40:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 42 pruned nodes, max_depth=4 [20]#011train-rmse:0.398551#011validation-rmse:0.398551 [01:40:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 36 pruned nodes, max_depth=5 [21]#011train-rmse:0.396397#011validation-rmse:0.396397 [01:40:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 38 pruned nodes, max_depth=5 [22]#011train-rmse:0.395411#011validation-rmse:0.395411 [01:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 38 pruned nodes, max_depth=5 [23]#011train-rmse:0.39451#011validation-rmse:0.39451 [01:40:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 44 pruned nodes, max_depth=3 [24]#011train-rmse:0.393733#011validation-rmse:0.393733 [01:40:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 44 pruned nodes, max_depth=5 [25]#011train-rmse:0.392505#011validation-rmse:0.392505 [01:40:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 30 pruned nodes, max_depth=4 [26]#011train-rmse:0.391927#011validation-rmse:0.391927 [01:40:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 54 pruned nodes, max_depth=2 [27]#011train-rmse:0.391553#011validation-rmse:0.391553 [01:40:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 50 pruned nodes, max_depth=3 [28]#011train-rmse:0.39078#011validation-rmse:0.39078 [01:40:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 48 pruned nodes, max_depth=3 [29]#011train-rmse:0.390213#011validation-rmse:0.390213 [01:40:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 34 pruned nodes, max_depth=5 [30]#011train-rmse:0.389364#011validation-rmse:0.389364 [01:40:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 28 pruned nodes, max_depth=5 [31]#011train-rmse:0.38878#011validation-rmse:0.38878 [01:40:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 34 pruned nodes, max_depth=4 [32]#011train-rmse:0.388102#011validation-rmse:0.388102 [01:40:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 30 pruned nodes, max_depth=5 [33]#011train-rmse:0.387189#011validation-rmse:0.387189 [01:40:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 22 pruned nodes, max_depth=5 [34]#011train-rmse:0.386072#011validation-rmse:0.386072 [01:40:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 34 pruned nodes, max_depth=5 [35]#011train-rmse:0.385422#011validation-rmse:0.385422 [01:40:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 46 pruned nodes, max_depth=0 [36]#011train-rmse:0.385422#011validation-rmse:0.385422 [01:40:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 28 pruned nodes, 
max_depth=5 [37]#011train-rmse:0.38446#011validation-rmse:0.38446 [01:40:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 40 pruned nodes, max_depth=4 [38]#011train-rmse:0.383882#011validation-rmse:0.383882 [01:40:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 44 pruned nodes, max_depth=2 [39]#011train-rmse:0.383588#011validation-rmse:0.383588 [01:40:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 36 pruned nodes, max_depth=5 [40]#011train-rmse:0.382463#011validation-rmse:0.382463 [01:40:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 40 pruned nodes, max_depth=0 [41]#011train-rmse:0.382463#011validation-rmse:0.382463 [01:40:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 30 pruned nodes, max_depth=0 [42]#011train-rmse:0.382463#011validation-rmse:0.382463 [01:40:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 34 pruned nodes, max_depth=0 [43]#011train-rmse:0.382463#011validation-rmse:0.382463 [01:40:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 40 pruned nodes, max_depth=4 [44]#011train-rmse:0.382009#011validation-rmse:0.382009 [01:40:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 60 pruned nodes, max_depth=0 [45]#011train-rmse:0.382009#011validation-rmse:0.382009 [01:40:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 36 pruned nodes, max_depth=5 [46]#011train-rmse:0.381333#011validation-rmse:0.381333 [01:40:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 48 pruned nodes, max_depth=4 [47]#011train-rmse:0.380824#011validation-rmse:0.380824 [01:40:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 30 pruned nodes, max_depth=4 [48]#011train-rmse:0.380417#011validation-rmse:0.380417 [01:40:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 52 pruned nodes, max_depth=0 [49]#011train-rmse:0.380417#011validation-rmse:0.380417 [01:41:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 36 pruned nodes, max_depth=0 [50]#011train-rmse:0.380416#011validation-rmse:0.380416 [01:41:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 30 pruned nodes, max_depth=0 [51]#011train-rmse:0.380416#011validation-rmse:0.380416 [01:41:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 44 pruned nodes, max_depth=0 [52]#011train-rmse:0.380416#011validation-rmse:0.380416 [01:41:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 28 pruned nodes, max_depth=5 [53]#011train-rmse:0.379174#011validation-rmse:0.379174 [01:41:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 50 pruned nodes, max_depth=0 [54]#011train-rmse:0.379174#011validation-rmse:0.379174 [01:41:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 34 pruned nodes, max_depth=0 [55]#011train-rmse:0.379174#011validation-rmse:0.379174 [01:41:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 52 pruned nodes, max_depth=0 [56]#011train-rmse:0.379174#011validation-rmse:0.379174 [01:41:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 48 pruned nodes, max_depth=5 [57]#011train-rmse:0.378402#011validation-rmse:0.378402 [01:41:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 36 pruned nodes, max_depth=0 
[58]#011train-rmse:0.378402#011validation-rmse:0.378402 [01:41:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 38 pruned nodes, max_depth=5 [59]#011train-rmse:0.377653#011validation-rmse:0.377653 [01:41:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 38 pruned nodes, max_depth=0 [60]#011train-rmse:0.377653#011validation-rmse:0.377653 [01:41:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 40 pruned nodes, max_depth=0 [61]#011train-rmse:0.377653#011validation-rmse:0.377653 [01:41:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 28 pruned nodes, max_depth=0 [62]#011train-rmse:0.377653#011validation-rmse:0.377653 [01:41:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 30 pruned nodes, max_depth=0 [63]#011train-rmse:0.377653#011validation-rmse:0.377653 [01:41:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 28 pruned nodes, max_depth=0 [64]#011train-rmse:0.377653#011validation-rmse:0.377653 [01:41:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 40 pruned nodes, max_depth=0 [65]#011train-rmse:0.377653#011validation-rmse:0.377653 [01:41:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 24 pruned nodes, max_depth=5 [66]#011train-rmse:0.377181#011validation-rmse:0.377181 [01:41:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 32 pruned nodes, max_depth=0 [67]#011train-rmse:0.377181#011validation-rmse:0.377181 [01:41:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 32 pruned nodes, max_depth=0 [68]#011train-rmse:0.377181#011validation-rmse:0.377181 [01:41:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 42 pruned nodes, max_depth=0 [69]#011train-rmse:0.377181#011validation-rmse:0.377181 [01:41:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 56 pruned nodes, max_depth=0 [70]#011train-rmse:0.377181#011validation-rmse:0.377181 [01:41:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 32 pruned nodes, max_depth=0 [71]#011train-rmse:0.377181#011validation-rmse:0.377181 [01:41:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 50 pruned nodes, max_depth=0 [72]#011train-rmse:0.377181#011validation-rmse:0.377181 [01:41:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 32 pruned nodes, max_depth=0 [73]#011train-rmse:0.377181#011validation-rmse:0.377181 [01:41:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 34 pruned nodes, max_depth=0 [74]#011train-rmse:0.377181#011validation-rmse:0.377181 [01:41:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 24 pruned nodes, max_depth=0 [75]#011train-rmse:0.377181#011validation-rmse:0.377181 [01:41:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 54 pruned nodes, max_depth=0 [76]#011train-rmse:0.377181#011validation-rmse:0.377181 Stopping. 
Best iteration: [66]#011train-rmse:0.377181#011validation-rmse:0.377181  2021-01-23 01:41:46 Uploading - Uploading generated training model 2021-01-23 01:41:46 Completed - Training job completed Training seconds: 189 Billable seconds: 189 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = None new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ................................2021-01-23T01:55:46.009:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2021-01-23 01:55:45 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-01-23 01:55:45 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-01-23 01:55:45 +0000] [1] [INFO] Using worker: gevent [2021-01-23 01:55:45 +0000] [36] [INFO] Booting worker with pid: 36 [2021-01-23 01:55:45 +0000] [37] [INFO] Booting worker with pid: 37 [2021-01-23:01:55:45:INFO] Model loaded successfully for worker : 36 [2021-01-23 01:55:46 +0000] [38] [INFO] Booting worker with pid: 38 [2021-01-23 01:55:46 +0000] [39] [INFO] Booting worker with pid: 39 [2021-01-23:01:55:46:INFO] Model loaded successfully for worker : 37 [2021-01-23:01:55:46:INFO] Model loaded successfully for worker : 38 [2021-01-23:01:55:46:INFO] Model loaded successfully for worker : 39 Arguments: serve [2021-01-23 01:55:45 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-01-23 01:55:45 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-01-23 01:55:45 +0000] [1] [INFO] Using worker: gevent [2021-01-23 01:55:45 +0000] [36] [INFO] Booting worker with pid: 36 [2021-01-23 01:55:45 +0000] [37] [INFO] Booting worker with pid: 37 [2021-01-23:01:55:45:INFO] Model loaded successfully for worker : 36 [2021-01-23 01:55:46 +0000] [38] [INFO] Booting worker with pid: 38 [2021-01-23 01:55:46 +0000] [39] [INFO] Booting worker with pid: 39 [2021-01-23:01:55:46:INFO] Model loaded successfully for worker : 37 [2021-01-23:01:55:46:INFO] Model loaded successfully for worker : 38 [2021-01-23:01:55:46:INFO] Model loaded successfully for worker : 39 [2021-01-23:01:55:46:INFO] Sniff delimiter as ',' [2021-01-23:01:55:46:INFO] Determined delimiter of CSV input 
is ',' [2021-01-23:01:55:46:INFO] Sniff delimiter as ',' [2021-01-23:01:55:46:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:46:INFO] Sniff delimiter as ',' [2021-01-23:01:55:46:INFO] Sniff delimiter as ',' [2021-01-23:01:55:46:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:46:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:46:INFO] Sniff delimiter as ',' [2021-01-23:01:55:46:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:46:INFO] Sniff delimiter as ',' [2021-01-23:01:55:46:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:46:INFO] Sniff delimiter as ',' [2021-01-23:01:55:46:INFO] Sniff delimiter as ',' [2021-01-23:01:55:46:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:46:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:48:INFO] Sniff delimiter as ',' [2021-01-23:01:55:48:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:49:INFO] Sniff delimiter as ',' [2021-01-23:01:55:49:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:49:INFO] Sniff delimiter as ',' [2021-01-23:01:55:49:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:48:INFO] Sniff delimiter as ',' [2021-01-23:01:55:48:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:49:INFO] Sniff delimiter as ',' [2021-01-23:01:55:49:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:49:INFO] Sniff delimiter as ',' [2021-01-23:01:55:49:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:50:INFO] Sniff delimiter as ',' [2021-01-23:01:55:50:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:50:INFO] Sniff delimiter as ',' [2021-01-23:01:55:50:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:50:INFO] Sniff delimiter as ',' [2021-01-23:01:55:50:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:50:INFO] Sniff delimiter as ',' [2021-01-23:01:55:50:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:51:INFO] Sniff delimiter as ',' [2021-01-23:01:55:51:INFO] Sniff delimiter as ',' [2021-01-23:01:55:51:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:52:INFO] Sniff delimiter as ',' [2021-01-23:01:55:52:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:52:INFO] Sniff delimiter as ',' [2021-01-23:01:55:52:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:51:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:52:INFO] Sniff delimiter as ',' [2021-01-23:01:55:52:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:52:INFO] Sniff delimiter as ',' [2021-01-23:01:55:52:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:52:INFO] Sniff delimiter as ',' [2021-01-23:01:55:52:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:52:INFO] Sniff delimiter as ',' [2021-01-23:01:55:52:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:54:INFO] Sniff delimiter as ',' [2021-01-23:01:55:54:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:54:INFO] Sniff delimiter as ',' [2021-01-23:01:55:54:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:54:INFO] Sniff delimiter as ',' [2021-01-23:01:55:54:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:54:INFO] Sniff delimiter as ',' [2021-01-23:01:55:54:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:54:INFO] Sniff delimiter as ',' [2021-01-23:01:55:54:INFO] Determined delimiter of CSV input is ',' 
[2021-01-23:01:55:54:INFO] Sniff delimiter as ',' [2021-01-23:01:55:54:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:54:INFO] Sniff delimiter as ',' [2021-01-23:01:55:54:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:54:INFO] Sniff delimiter as ',' [2021-01-23:01:55:54:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:57:INFO] Sniff delimiter as ',' [2021-01-23:01:55:57:INFO] Sniff delimiter as ',' [2021-01-23:01:55:57:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:57:INFO] Sniff delimiter as ',' [2021-01-23:01:55:57:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:57:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:57:INFO] Sniff delimiter as ',' [2021-01-23:01:55:57:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:57:INFO] Sniff delimiter as ',' [2021-01-23:01:55:57:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:57:INFO] Sniff delimiter as ',' [2021-01-23:01:55:57:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:59:INFO] Sniff delimiter as ',' [2021-01-23:01:55:59:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:59:INFO] Sniff delimiter as ',' [2021-01-23:01:55:59:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:59:INFO] Sniff delimiter as ',' [2021-01-23:01:55:59:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:59:INFO] Sniff delimiter as ',' [2021-01-23:01:55:59:INFO] Sniff delimiter as ',' [2021-01-23:01:55:59:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:59:INFO] Sniff delimiter as ',' [2021-01-23:01:55:59:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:59:INFO] Sniff delimiter as ',' [2021-01-23:01:55:59:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:59:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:55:59:INFO] Sniff delimiter as ',' [2021-01-23:01:55:59:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:01:INFO] Sniff delimiter as ',' [2021-01-23:01:56:01:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:01:INFO] Sniff delimiter as ',' [2021-01-23:01:56:01:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:01:INFO] Sniff delimiter as ',' [2021-01-23:01:56:01:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:01:INFO] Sniff delimiter as ',' [2021-01-23:01:56:01:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:02:INFO] Sniff delimiter as ',' [2021-01-23:01:56:02:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:02:INFO] Sniff delimiter as ',' [2021-01-23:01:56:02:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:02:INFO] Sniff delimiter as ',' [2021-01-23:01:56:02:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:02:INFO] Sniff delimiter as ',' [2021-01-23:01:56:02:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:04:INFO] Sniff delimiter as ',' [2021-01-23:01:56:04:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:04:INFO] Sniff delimiter as ',' [2021-01-23:01:56:04:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:04:INFO] Sniff delimiter as ',' [2021-01-23:01:56:04:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:04:INFO] Sniff delimiter as ',' [2021-01-23:01:56:04:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:04:INFO] Sniff delimiter as ',' [2021-01-23:01:56:04:INFO] Determined delimiter of CSV input is ',' 
[2021-01-23:01:56:04:INFO] Sniff delimiter as ',' [2021-01-23:01:56:04:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:04:INFO] Sniff delimiter as ',' [2021-01-23:01:56:04:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:04:INFO] Sniff delimiter as ',' [2021-01-23:01:56:04:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:09:INFO] Sniff delimiter as ',' [2021-01-23:01:56:09:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:09:INFO] Sniff delimiter as ',' [2021-01-23:01:56:09:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:09:INFO] Sniff delimiter as ',' [2021-01-23:01:56:09:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:09:INFO] Sniff delimiter as ',' [2021-01-23:01:56:09:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:09:INFO] Sniff delimiter as ',' [2021-01-23:01:56:09:INFO] Determined delimiter of CSV input is ',' [2021-01-23:01:56:09:INFO] Sniff delimiter as ',' [2021-01-23:01:56:09:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-us-east-1-651844727492/xgboost-2021-01-23-01-50-36-519/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. # test_X = None test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. 
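Before uploading, a quick check that the re-encoded test set has the same number of features as the data the new model was trained on can save a confusing debugging session later; a dimension mismatch here would make the batch transform results meaningless. A minimal sketch:
###Code
# The new model was trained on 5000-dimensional bag-of-words vectors,
# so the re-encoded test set should have the same width.
assert test_X.shape[1] == len(new_vectorizer.vocabulary_), "feature count mismatch"
print(test_X.shape)
###Output
_____no_output_____
###Markdown
With the shapes confirmed, we write the encoded test set locally, upload it to S3, and run the batch transform job.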
###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = None new_xgb_endpoint_config_name = "IMDB-xgboost-update-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = None new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. 
# UpdateEndpoint needs the *name* of the new endpoint configuration and the name of the
# endpoint that is already serving traffic.
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint,
                                         EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
-------------!
###Markdown
Step 7: Delete the Endpoint
Of course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Some Additional Questions
This notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.
For example,
- What other ways could the underlying distribution change?
- Is it a good idea to re-train the model using only the new data?
- What would change if the quantity of new data wasn't large? Say you only received 500 samples?

Optional: Clean up
The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*

# And then we delete the directory itself
!rmdir $data_dir

# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis Updating a Model in SageMaker
_Deep Learning Nanodegree Program | Deployment_
---
In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.
This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5.
Instructions
Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them.
Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. ###Code # Make sure that we use SageMaker 1.x !pip install sagemaker==1.72.0 ###Output Requirement already satisfied: sagemaker==1.72.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (1.72.0) Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.5) Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.14.0) Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5) Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.4.0) Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.8) Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.4.1) Requirement already satisfied: smdebug-rulesconfig==0.1.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.4) Requirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.16.63) Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.3.4) Requirement already satisfied: botocore<1.20.0,>=1.19.63 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.19.63) Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0) Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.63->boto3>=1.14.12->sagemaker==1.72.0) (1.26.2) Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.63->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1) Requirement already satisfied: typing-extensions>=3.6.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.7.4.3) Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.4.0) Requirement 
already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.7) Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0) WARNING: You are using pip version 20.3.3; however, version 21.0.1 is available. You should consider upgrading via the '/home/ec2-user/anaconda3/envs/pytorch_p36/bin/python -m pip install --upgrade pip' command. ###Markdown Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2021-02-17 18:28:45-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 10.5MB/s in 10s 2021-02-17 18:28:55 (8.02 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
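The extracted archive contains one small text file per review, organized first by split (`train`/`test`) and then by label (`pos`/`neg`). If you want to confirm the layout on disk before running the loader, an optional check along these lines can help (a sketch, not required):
###Code
from glob import glob

# Count the review files in each of the four folders we are about to read.
for split in ['train', 'test']:
    for label in ['pos', 'neg']:
        n_files = len(glob('../data/aclImdb/{}/{}/*.txt'.format(split, label)))
        print(split, label, n_files)  # each folder should contain 12500 reviews
###Output
_____no_output_____
###Markdown
The `read_imdb_data` function below walks exactly these four folders, recording a label of 1 for `pos` reviews and 0 for `neg` reviews.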
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" # Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] # Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return the combined training data, test data, training labels and test labels return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
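###Markdown Before defining the full `review_to_words` helper in the next cell, here is a minimal, hedged sketch of the same cleaning steps applied to a single made-up review string (the sample text below is invented purely for illustration and is not taken from the dataset), just to make the effect of HTML stripping, lowercasing, stopword removal and stemming concrete. ###Code
import re
import nltk
from bs4 import BeautifulSoup
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer

nltk.download("stopwords", quiet=True)  # the stopword list is needed below

sample = "This movie was <br />GREAT!! The acting was wonderful, I loved it."
text = BeautifulSoup(sample, "html.parser").get_text()     # strip HTML tags such as <br />
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())          # lowercase and drop punctuation
words = [w for w in text.split() if w not in stopwords.words("english")]  # remove stopwords
words = [PorterStemmer().stem(w) for w in words]           # reduce each word to its stem
print(words)  # roughly: ['movi', 'great', 'act', 'wonder', 'love']
###Output _____no_output_____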
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
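###Markdown As a small aside before the full, cached implementation below, the following toy sketch (with a made-up, already-tokenized corpus invented purely for illustration) shows the key point from the paragraph above: the `CountVectorizer` is *fit* on the training documents only, and the test documents are merely *transformed* with that fixed vocabulary, so any word never seen in training is simply ignored. ###Code
from sklearn.feature_extraction.text import CountVectorizer

# Toy, already-tokenized documents (invented for illustration only).
toy_train = [["great", "movi", "great", "act"], ["bad", "plot", "bad", "act"]]
toy_test = [["great", "plot", "soundtrack"]]   # 'soundtrack' never appears in training

# The documents are already lists of words, so we pass identity functions
# for the preprocessor and tokenizer, exactly as in the cell below.
vec = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)

train_bow = vec.fit_transform(toy_train).toarray()  # vocabulary is built from toy_train only
test_bow = vec.transform(toy_test).toarray()        # unseen words are silently dropped

print(vec.vocabulary_)  # word -> column index, e.g. {'great': 2, 'movi': 3, ...}
print(train_bow)
print(test_bow)         # there is no 'soundtrack' column, so that word contributes nothing
###Output _____no_output_____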
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker.
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2021-02-17 18:30:52 Starting - Starting the training job... 2021-02-17 18:30:54 Starting - Launching requested ML instances...... 2021-02-17 18:32:02 Starting - Preparing the instances for training... 2021-02-17 18:32:52 Downloading - Downloading input data...... 2021-02-17 18:33:49 Training - Training image download completed. Training in progress..Arguments: train [2021-02-17:18:33:50:INFO] Running standalone xgboost training. [2021-02-17:18:33:50:INFO] File size need to be processed in the node: 238.47mb.
Available memory size in the node: 8425.98mb [2021-02-17:18:33:50:INFO] Determined delimiter of CSV input is ',' [18:33:50] S3DistributionType set as FullyReplicated [18:33:52] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2021-02-17:18:33:52:INFO] Determined delimiter of CSV input is ',' [18:33:52] S3DistributionType set as FullyReplicated [18:33:53] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [18:33:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [0]#011train-error:0.2894#011validation-error:0.3071 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [18:33:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [1]#011train-error:0.274867#011validation-error:0.2877 [18:33:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 12 pruned nodes, max_depth=5 [2]#011train-error:0.273867#011validation-error:0.29 [18:34:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [3]#011train-error:0.271733#011validation-error:0.2852 [18:34:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [4]#011train-error:0.260467#011validation-error:0.2748 [18:34:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.248933#011validation-error:0.2673 [18:34:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [6]#011train-error:0.246267#011validation-error:0.2642 [18:34:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [7]#011train-error:0.241867#011validation-error:0.2585 [18:34:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [8]#011train-error:0.2324#011validation-error:0.2475 [18:34:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [9]#011train-error:0.224333#011validation-error:0.2432 [18:34:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [10]#011train-error:0.2234#011validation-error:0.2412 [18:34:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [11]#011train-error:0.2212#011validation-error:0.2398 [18:34:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [12]#011train-error:0.215267#011validation-error:0.2352 [18:34:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5 [13]#011train-error:0.208867#011validation-error:0.231 [18:34:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [14]#011train-error:0.203933#011validation-error:0.2253 [18:34:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [15]#011train-error:0.2022#011validation-error:0.224 [18:34:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 0 pruned nodes, max_depth=5 [16]#011train-error:0.199133#011validation-error:0.2204 [18:34:20] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [17]#011train-error:0.194733#011validation-error:0.2182 [18:34:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [18]#011train-error:0.194333#011validation-error:0.2161 [18:34:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [19]#011train-error:0.1904#011validation-error:0.2124 [18:34:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 16 pruned nodes, max_depth=5 [20]#011train-error:0.189067#011validation-error:0.2101 [18:34:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 16 pruned nodes, max_depth=5 [21]#011train-error:0.1854#011validation-error:0.2058 [18:34:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [22]#011train-error:0.182133#011validation-error:0.2032 [18:34:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [23]#011train-error:0.1798#011validation-error:0.2038 [18:34:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5 [24]#011train-error:0.1778#011validation-error:0.2008 [18:34:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 18 pruned nodes, max_depth=5 [25]#011train-error:0.177133#011validation-error:0.201 [18:34:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [26]#011train-error:0.1752#011validation-error:0.1994 [18:34:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 16 pruned nodes, max_depth=5 [27]#011train-error:0.1718#011validation-error:0.1964 [18:34:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [28]#011train-error:0.170667#011validation-error:0.1966 [18:34:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [29]#011train-error:0.166467#011validation-error:0.1938 [18:34:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [30]#011train-error:0.164867#011validation-error:0.1922 [18:34:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [31]#011train-error:0.161333#011validation-error:0.1911 [18:34:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [32]#011train-error:0.159867#011validation-error:0.1891 [18:34:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [33]#011train-error:0.158133#011validation-error:0.189 [18:34:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [34]#011train-error:0.156733#011validation-error:0.1866 [18:34:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [35]#011train-error:0.155733#011validation-error:0.184 [18:34:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 12 pruned nodes, max_depth=5 [36]#011train-error:0.154467#011validation-error:0.1829 [18:34:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [37]#011train-error:0.154#011validation-error:0.1824 [18:34:47] src/tree/updater_prune.cc:74: 
tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [38]#011train-error:0.1522#011validation-error:0.1825 [18:34:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [39]#011train-error:0.1496#011validation-error:0.1821 [18:34:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [40]#011train-error:0.1478#011validation-error:0.1808 [18:34:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [41]#011train-error:0.145067#011validation-error:0.1799 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code xgb_transformer.wait() ###Output .............................2021-02-17T18:42:47.836:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2021-02-17 18:42:47 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-02-17 18:42:47 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-02-17 18:42:47 +0000] [1] [INFO] Using worker: gevent [2021-02-17 18:42:47 +0000] [37] [INFO] Booting worker with pid: 37 [2021-02-17 18:42:47 +0000] [38] [INFO] Booting worker with pid: 38 [2021-02-17 18:42:47 +0000] [39] [INFO] Booting worker with pid: 39 Arguments: serve [2021-02-17 18:42:47 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-02-17 18:42:47 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-02-17 18:42:47 +0000] [1] [INFO] Using worker: gevent [2021-02-17 18:42:47 +0000] [37] [INFO] Booting worker with pid: 37 [2021-02-17 18:42:47 +0000] [38] [INFO] Booting worker with pid: 38 [2021-02-17 18:42:47 +0000] [39] [INFO] Booting worker with pid: 39 [2021-02-17 18:42:47 +0000] [40] [INFO] Booting worker with pid: 40 [2021-02-17:18:42:47:INFO] Model loaded successfully for worker : 37 [2021-02-17:18:42:47:INFO] Model loaded successfully for worker : 39 [2021-02-17:18:42:47:INFO] Model loaded successfully for worker : 38 [2021-02-17:18:42:47:INFO] Model loaded successfully for worker : 40 [2021-02-17:18:42:48:INFO] Sniff delimiter as ',' [2021-02-17:18:42:48:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:48:INFO] Sniff delimiter as ',' [2021-02-17:18:42:48:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:48:INFO] Sniff delimiter as ',' [2021-02-17:18:42:48:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:48:INFO] Sniff delimiter as ',' [2021-02-17:18:42:48:INFO] Determined delimiter of CSV input is ',' [2021-02-17 18:42:47 +0000] [40] [INFO] Booting worker with pid: 40 [2021-02-17:18:42:47:INFO] Model loaded successfully for worker : 37 [2021-02-17:18:42:47:INFO] Model loaded successfully for worker : 39 [2021-02-17:18:42:47:INFO] Model loaded successfully for worker : 38 [2021-02-17:18:42:47:INFO] Model loaded successfully for worker : 40 [2021-02-17:18:42:48:INFO] Sniff delimiter as ',' [2021-02-17:18:42:48:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:48:INFO] Sniff delimiter as ',' [2021-02-17:18:42:48:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:48:INFO] Sniff delimiter as ',' [2021-02-17:18:42:48:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:48:INFO] Sniff delimiter as ',' [2021-02-17:18:42:48:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:50:INFO] Sniff delimiter as ',' [2021-02-17:18:42:50:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:50:INFO] Sniff delimiter as ',' [2021-02-17:18:42:50:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:50:INFO] Sniff delimiter as ',' [2021-02-17:18:42:50:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:51:INFO] Sniff delimiter as ',' [2021-02-17:18:42:51:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:50:INFO] Sniff delimiter as ',' [2021-02-17:18:42:50:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:50:INFO] Sniff delimiter as ',' [2021-02-17:18:42:50:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:50:INFO] Sniff delimiter as ',' [2021-02-17:18:42:50:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:51:INFO] Sniff delimiter as ',' 
[2021-02-17:18:42:51:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:53:INFO] Sniff delimiter as ',' [2021-02-17:18:42:53:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:53:INFO] Sniff delimiter as ',' [2021-02-17:18:42:53:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:53:INFO] Sniff delimiter as ',' [2021-02-17:18:42:53:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:53:INFO] Sniff delimiter as ',' [2021-02-17:18:42:53:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:53:INFO] Sniff delimiter as ',' [2021-02-17:18:42:53:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:53:INFO] Sniff delimiter as ',' [2021-02-17:18:42:53:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:53:INFO] Sniff delimiter as ',' [2021-02-17:18:42:53:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:53:INFO] Sniff delimiter as ',' [2021-02-17:18:42:53:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:56:INFO] Sniff delimiter as ',' [2021-02-17:18:42:56:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:56:INFO] Sniff delimiter as ',' [2021-02-17:18:42:56:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:58:INFO] Sniff delimiter as ',' [2021-02-17:18:42:58:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:58:INFO] Sniff delimiter as ',' [2021-02-17:18:42:58:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:58:INFO] Sniff delimiter as ',' [2021-02-17:18:42:58:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:58:INFO] Sniff delimiter as ',' [2021-02-17:18:42:58:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:58:INFO] Sniff delimiter as ',' [2021-02-17:18:42:58:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:58:INFO] Sniff delimiter as ',' [2021-02-17:18:42:58:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:58:INFO] Sniff delimiter as ',' [2021-02-17:18:42:58:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:42:58:INFO] Sniff delimiter as ',' [2021-02-17:18:42:58:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:43:00:INFO] Sniff delimiter as ',' [2021-02-17:18:43:00:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:43:00:INFO] Sniff delimiter as ',' [2021-02-17:18:43:00:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:43:00:INFO] Sniff delimiter as ',' [2021-02-17:18:43:00:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:43:01:INFO] Sniff delimiter as ',' [2021-02-17:18:43:01:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:43:00:INFO] Sniff delimiter as ',' [2021-02-17:18:43:00:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:43:00:INFO] Sniff delimiter as ',' [2021-02-17:18:43:00:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:43:00:INFO] Sniff delimiter as ',' [2021-02-17:18:43:00:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:43:01:INFO] Sniff delimiter as ',' [2021-02-17:18:43:01:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:43:03:INFO] Sniff delimiter as ',' [2021-02-17:18:43:03:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:43:03:INFO] Sniff delimiter as ',' [2021-02-17:18:43:03:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:43:03:INFO] Sniff delimiter as ',' [2021-02-17:18:43:03:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:43:03:INFO] Sniff delimiter as ',' 
[2021-02-17:18:43:03:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:43:03:INFO] Sniff delimiter as ',' [2021-02-17:18:43:03:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:43:03:INFO] Sniff delimiter as ',' [2021-02-17:18:43:03:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:43:03:INFO] Sniff delimiter as ',' [2021-02-17:18:43:03:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:43:03:INFO] Sniff delimiter as ',' [2021-02-17:18:43:03:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/370.0 KiB (2.1 MiB/s) with 1 file(s) remaining Completed 370.0 KiB/370.0 KiB (3.0 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-central-1-941012658317/xgboost-2021-02-17-18-38-09-176/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = None vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = None new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_data_location = None new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. 
xgb_transformer.transform(new_data_location,content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output .................................2021-02-17T18:50:45.240:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2021-02-17 18:50:45 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-02-17 18:50:45 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-02-17 18:50:45 +0000] [1] [INFO] Using worker: gevent [2021-02-17 18:50:45 +0000] [36] [INFO] Booting worker with pid: 36 [2021-02-17 18:50:45 +0000] [37] [INFO] Booting worker with pid: 37 [2021-02-17:18:50:45:INFO] Model loaded successfully for worker : 36 [2021-02-17 18:50:45 +0000] [38] [INFO] Booting worker with pid: 38 [2021-02-17 18:50:45 +0000] [39] [INFO] Booting worker with pid: 39 [2021-02-17:18:50:45:INFO] Model loaded successfully for worker : 37 [2021-02-17:18:50:45:INFO] Model loaded successfully for worker : 38 Arguments: serve [2021-02-17 18:50:45 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-02-17 18:50:45 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-02-17 18:50:45 +0000] [1] [INFO] Using worker: gevent [2021-02-17 18:50:45 +0000] [36] [INFO] Booting worker with pid: 36 [2021-02-17 18:50:45 +0000] [37] [INFO] Booting worker with pid: 37 [2021-02-17:18:50:45:INFO] Model loaded successfully for worker : 36 [2021-02-17 18:50:45 +0000] [38] [INFO] Booting worker with pid: 38 [2021-02-17 18:50:45 +0000] [39] [INFO] Booting worker with pid: 39 [2021-02-17:18:50:45:INFO] Model loaded successfully for worker : 37 [2021-02-17:18:50:45:INFO] Model loaded successfully for worker : 38 [2021-02-17:18:50:45:INFO] Model loaded successfully for worker : 39 [2021-02-17:18:50:45:INFO] Sniff delimiter as ',' [2021-02-17:18:50:45:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:45:INFO] Sniff delimiter as ',' [2021-02-17:18:50:45:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:45:INFO] Sniff delimiter as ',' [2021-02-17:18:50:45:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:45:INFO] Sniff delimiter as ',' [2021-02-17:18:50:45:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:45:INFO] Model loaded successfully for worker : 39 [2021-02-17:18:50:45:INFO] Sniff delimiter as ',' [2021-02-17:18:50:45:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:45:INFO] Sniff delimiter as ',' [2021-02-17:18:50:45:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:45:INFO] Sniff delimiter as ',' [2021-02-17:18:50:45:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:45:INFO] Sniff delimiter as ',' [2021-02-17:18:50:45:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:48:INFO] Sniff delimiter as ',' [2021-02-17:18:50:48:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:48:INFO] Sniff delimiter as ',' [2021-02-17:18:50:48:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:48:INFO] Sniff delimiter as ',' [2021-02-17:18:50:48:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:48:INFO] Sniff delimiter as ',' [2021-02-17:18:50:48:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:48:INFO] Sniff delimiter as ',' [2021-02-17:18:50:48:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:48:INFO] Sniff delimiter as ',' [2021-02-17:18:50:48:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:48:INFO] Sniff delimiter as ',' [2021-02-17:18:50:48:INFO] Determined delimiter of 
CSV input is ',' [2021-02-17:18:50:48:INFO] Sniff delimiter as ',' [2021-02-17:18:50:48:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:50:INFO] Sniff delimiter as ',' [2021-02-17:18:50:50:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:50:INFO] Sniff delimiter as ',' [2021-02-17:18:50:50:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:50:INFO] Sniff delimiter as ',' [2021-02-17:18:50:50:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:51:INFO] Sniff delimiter as ',' [2021-02-17:18:50:50:INFO] Sniff delimiter as ',' [2021-02-17:18:50:50:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:50:INFO] Sniff delimiter as ',' [2021-02-17:18:50:50:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:50:INFO] Sniff delimiter as ',' [2021-02-17:18:50:50:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:51:INFO] Sniff delimiter as ',' [2021-02-17:18:50:51:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:51:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:53:INFO] Sniff delimiter as ',' [2021-02-17:18:50:53:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:53:INFO] Sniff delimiter as ',' [2021-02-17:18:50:53:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:53:INFO] Sniff delimiter as ',' [2021-02-17:18:50:53:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:53:INFO] Sniff delimiter as ',' [2021-02-17:18:50:53:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:53:INFO] Sniff delimiter as ',' [2021-02-17:18:50:53:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:53:INFO] Sniff delimiter as ',' [2021-02-17:18:50:53:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:53:INFO] Sniff delimiter as ',' [2021-02-17:18:50:53:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:53:INFO] Sniff delimiter as ',' [2021-02-17:18:50:53:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:55:INFO] Sniff delimiter as ',' [2021-02-17:18:50:55:INFO] Sniff delimiter as ',' [2021-02-17:18:50:55:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:55:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:55:INFO] Sniff delimiter as ',' [2021-02-17:18:50:55:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:55:INFO] Sniff delimiter as ',' [2021-02-17:18:50:55:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:56:INFO] Sniff delimiter as ',' [2021-02-17:18:50:56:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:55:INFO] Sniff delimiter as ',' [2021-02-17:18:50:55:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:55:INFO] Sniff delimiter as ',' [2021-02-17:18:50:55:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:56:INFO] Sniff delimiter as ',' [2021-02-17:18:50:56:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:57:INFO] Sniff delimiter as ',' [2021-02-17:18:50:57:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:57:INFO] Sniff delimiter as ',' [2021-02-17:18:50:57:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:58:INFO] Sniff delimiter as ',' [2021-02-17:18:50:58:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:57:INFO] Sniff delimiter as ',' [2021-02-17:18:50:57:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:57:INFO] Sniff delimiter as ',' [2021-02-17:18:50:57:INFO] Determined delimiter of CSV input is 
',' [2021-02-17:18:50:58:INFO] Sniff delimiter as ',' [2021-02-17:18:50:58:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:58:INFO] Sniff delimiter as ',' [2021-02-17:18:50:58:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:50:58:INFO] Sniff delimiter as ',' [2021-02-17:18:50:58:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:51:00:INFO] Sniff delimiter as ',' [2021-02-17:18:51:00:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:51:00:INFO] Sniff delimiter as ',' [2021-02-17:18:51:00:INFO] Sniff delimiter as ',' [2021-02-17:18:51:00:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:51:00:INFO] Sniff delimiter as ',' [2021-02-17:18:51:00:INFO] Determined delimiter of CSV input is ',' [2021-02-17:18:51:00:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/370.2 KiB (3.5 MiB/s) with 1 file(s) remaining Completed 370.2 KiB/370.2 KiB (5.0 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-central-1-941012658317/xgboost-2021-02-17-18-45-29-579/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = None xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. Using already existing model: xgboost-2021-02-17-18-30-52-676 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. 
xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['scott', 'henderson', 'alan', 'curti', 'unjustli', 'accus', 'kill', 'unfaith', 'wife', 'night', 'murder', 'mysteri', 'woman', 'refus', 'give', 'name', 'accus', 'murder', 'polic', 'visit', 'place', 'peopl', 'rememb', 'alon', 'sentenc', 'die', 'secretari', 'ella', 'rain', 'set', 'find', 'murder', 'love', 'made', 'b', 'film', 'univers', 'look', 'cast', 'charact', 'actor', 'one', 'star', 'franchot', 'tone', 'declin', 'budget', 'small', 'cast', 'mostli', 'unknown', 'came', 'one', 'best', 'film', 'noir', '1940', 'beauti', 'direct', 'curt', 'siodmak', 'fantast', 'script', 'came', 'excel', 'book', 'william', 'irish', 'pen', 'name', 'cornel', 'woolrich', 'move', 'quickli', 'look', 'fantast', 'infam', 'jam', 'session', 'rain', 'elisha', 'cook', 'jr', 'come', 'screen', 'incred', 'sexual', 'energi', 'surpris', 'censor', 'cut', 'flaw', 'prevent', 'perfect', 'tone', 'give', 'dread', 'perform', 'look', 'ghastli', 'horribl', 'also', 'curti', 'stiff', 'bland', 'henderson', 'realli', 'wonder', 'rain', 'love', 'unemot', 'rain', 'pretti', 'good', 'lead', 'role', 'pretti', 'full', 'life', 'also', 'last', 'scene', 'murder', 'never', 'ring', 'true', 'hardli', 'threat', 'physic', 'reaction', 'seem', 'overst', 'still', 'give', '9', 'realli', 'great', 'film', 'flaw', 'asid', 'banana'], 0) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:507: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None' warnings.warn("The parameter 'token_pattern' will not be used" ###Markdown Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets. 
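###Markdown The next few cells compare the two vocabularies as sets. As an additional, optional check (a sketch that uses only objects which already exist at this point: `vocabulary`, `new_vectorizer` and the still-loaded `new_X`), we can also count how often the words that are unique to each vocabulary actually occur in the new reviews. A word that is both new to the vocabulary and very frequent would be a strong hint when thinking about the open-ended question a little further below. ###Code
from collections import Counter

# Count every (already stemmed) word across the new reviews. Note that new_X
# is still a list of word lists at this point; it is only set to None later on.
new_word_counts = Counter(w for review in new_X for w in review)

only_in_original = set(vocabulary.keys()) - set(new_vectorizer.vocabulary_.keys())
only_in_new = set(new_vectorizer.vocabulary_.keys()) - set(vocabulary.keys())

print("Only in the original vocabulary (with their counts in the new data):")
print(sorted(((w, new_word_counts[w]) for w in only_in_original), key=lambda t: -t[1]))

print("Only in the new vocabulary (with their counts in the new data):")
print(sorted(((w, new_word_counts[w]) for w in only_in_new), key=lambda t: -t[1]))
###Output _____no_output_____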
###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'ghetto', 'victorian', 'weari', 'reincarn', 'playboy', 'spill', '21st'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'masterson', 'orchestr', 'sophi', 'optimist', 'dubiou', 'omin', 'banana'} ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here? Which words (if any) appear with a larger than expected frequency, and what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag-of-words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words.
Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. prefix = 'new_sagem' new_data_location = None new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = None new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = None new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = None new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = None s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = None s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model.
new_xgb.fit({'train':s3_new_input_train, 'validation':s3_new_input_validation}) ###Output 2021-02-17 19:08:39 Starting - Starting the training job... 2021-02-17 19:08:42 Starting - Launching requested ML instances......... 2021-02-17 19:10:13 Starting - Preparing the instances for training... 2021-02-17 19:11:03 Downloading - Downloading input data... 2021-02-17 19:11:39 Training - Training image download completed. Training in progress..Arguments: train [2021-02-17:19:11:40:INFO] Running standalone xgboost training. [2021-02-17:19:11:40:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8428.81mb [2021-02-17:19:11:40:INFO] Determined delimiter of CSV input is ',' [19:11:40] S3DistributionType set as FullyReplicated [19:11:42] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2021-02-17:19:11:42:INFO] Determined delimiter of CSV input is ',' [19:11:42] S3DistributionType set as FullyReplicated [19:11:43] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [19:11:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [0]#011train-error:0.295467#011validation-error:0.3102 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [19:11:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [1]#011train-error:0.292733#011validation-error:0.3071 [19:11:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [2]#011train-error:0.2816#011validation-error:0.2939 [19:11:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [3]#011train-error:0.2756#011validation-error:0.2898 [19:11:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [4]#011train-error:0.272267#011validation-error:0.2869 [19:11:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [5]#011train-error:0.266933#011validation-error:0.2821 [19:11:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 2 pruned nodes, max_depth=5 [6]#011train-error:0.260467#011validation-error:0.2758 [19:11:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [7]#011train-error:0.254533#011validation-error:0.2705 [19:11:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 6 pruned nodes, max_depth=5 [8]#011train-error:0.248067#011validation-error:0.2603 [19:11:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 4 pruned nodes, max_depth=5 [9]#011train-error:0.240067#011validation-error:0.2539 [19:12:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [10]#011train-error:0.2324#011validation-error:0.2467 [19:12:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [11]#011train-error:0.2272#011validation-error:0.2417 [19:12:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [12]#011train-error:0.223667#011validation-error:0.2417 [19:12:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra 
nodes, 4 pruned nodes, max_depth=5 [13]#011train-error:0.220267#011validation-error:0.2384 [19:12:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [14]#011train-error:0.212667#011validation-error:0.2338 [19:12:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 14 pruned nodes, max_depth=5 [15]#011train-error:0.209467#011validation-error:0.2304 [19:12:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [16]#011train-error:0.209067#011validation-error:0.2261 [19:12:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [17]#011train-error:0.2048#011validation-error:0.2243 [19:12:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [18]#011train-error:0.202467#011validation-error:0.2215 [19:12:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [19]#011train-error:0.201#011validation-error:0.2198 [19:12:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [20]#011train-error:0.198267#011validation-error:0.2184 [19:12:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [21]#011train-error:0.1962#011validation-error:0.2166 [19:12:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [22]#011train-error:0.192133#011validation-error:0.2155 [19:12:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [23]#011train-error:0.1904#011validation-error:0.2162 [19:12:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [24]#011train-error:0.188067#011validation-error:0.2151 [19:12:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [25]#011train-error:0.187133#011validation-error:0.2147 [19:12:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [26]#011train-error:0.1852#011validation-error:0.2134 [19:12:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5 [27]#011train-error:0.182667#011validation-error:0.2113 [19:12:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [28]#011train-error:0.180467#011validation-error:0.2098 [19:12:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [29]#011train-error:0.178533#011validation-error:0.207 [19:12:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [30]#011train-error:0.177267#011validation-error:0.2058 [19:12:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 16 pruned nodes, max_depth=5 [31]#011train-error:0.176267#011validation-error:0.2054 [19:12:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [32]#011train-error:0.174333#011validation-error:0.204 [19:12:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [33]#011train-error:0.171733#011validation-error:0.2028 [19:12:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, 
max_depth=5 [34]#011train-error:0.1712#011validation-error:0.2039 [19:12:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [35]#011train-error:0.1696#011validation-error:0.2012 [19:12:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [36]#011train-error:0.168467#011validation-error:0.2007 [19:12:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [37]#011train-error:0.166933#011validation-error:0.2001 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = None new_xgb_transformer = new_xgb.transformer(instance_count=1, instance_type='ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. 
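# content_type='text/csv' tells the batch transform job how the input is serialized,
# and split_type='Line' lets SageMaker split the large input file into individual
# records (one encoded review per line) so they can be sent to the model in smaller
# batches. transform() only starts the job; wait() blocks until it finishes and
# prints the job's logs.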
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ...............................2021-02-17T19:22:12.602:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2021-02-17 19:22:12 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-02-17 19:22:12 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-02-17 19:22:12 +0000] [1] [INFO] Using worker: gevent Arguments: serve [2021-02-17 19:22:12 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-02-17 19:22:12 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-02-17 19:22:12 +0000] [1] [INFO] Using worker: gevent [2021-02-17 19:22:12 +0000] [36] [INFO] Booting worker with pid: 36 [2021-02-17 19:22:12 +0000] [37] [INFO] Booting worker with pid: 37 [2021-02-17:19:22:12:INFO] Model loaded successfully for worker : 36 [2021-02-17 19:22:12 +0000] [38] [INFO] Booting worker with pid: 38 [2021-02-17:19:22:12:INFO] Model loaded successfully for worker : 37 [2021-02-17 19:22:12 +0000] [39] [INFO] Booting worker with pid: 39 [2021-02-17:19:22:12:INFO] Model loaded successfully for worker : 38 [2021-02-17:19:22:12:INFO] Model loaded successfully for worker : 39 [2021-02-17:19:22:12:INFO] Sniff delimiter as ',' [2021-02-17:19:22:12:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:13:INFO] Sniff delimiter as ',' [2021-02-17:19:22:13:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:13:INFO] Sniff delimiter as ',' [2021-02-17:19:22:13:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:13:INFO] Sniff delimiter as ',' [2021-02-17:19:22:13:INFO] Determined delimiter of CSV input is ',' [2021-02-17 19:22:12 +0000] [36] [INFO] Booting worker with pid: 36 [2021-02-17 19:22:12 +0000] [37] [INFO] Booting worker with pid: 37 [2021-02-17:19:22:12:INFO] Model loaded successfully for worker : 36 [2021-02-17 19:22:12 +0000] [38] [INFO] Booting worker with pid: 38 [2021-02-17:19:22:12:INFO] Model loaded successfully for worker : 37 [2021-02-17 19:22:12 +0000] [39] [INFO] Booting worker with pid: 39 [2021-02-17:19:22:12:INFO] Model loaded successfully for worker : 38 [2021-02-17:19:22:12:INFO] Model loaded successfully for worker : 39 [2021-02-17:19:22:12:INFO] Sniff delimiter as ',' [2021-02-17:19:22:12:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:13:INFO] Sniff delimiter as ',' [2021-02-17:19:22:13:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:13:INFO] Sniff delimiter as ',' [2021-02-17:19:22:13:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:13:INFO] Sniff delimiter as ',' [2021-02-17:19:22:13:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:15:INFO] Sniff delimiter as ',' [2021-02-17:19:22:15:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:15:INFO] Sniff delimiter as ',' [2021-02-17:19:22:15:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:15:INFO] Sniff delimiter as ',' [2021-02-17:19:22:15:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:16:INFO] Sniff delimiter as ',' [2021-02-17:19:22:16:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:15:INFO] Sniff delimiter as ',' [2021-02-17:19:22:15:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:15:INFO] Sniff delimiter as ',' [2021-02-17:19:22:15:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:15:INFO] Sniff delimiter as ',' [2021-02-17:19:22:15:INFO] Determined 
delimiter of CSV input is ',' [2021-02-17:19:22:16:INFO] Sniff delimiter as ',' [2021-02-17:19:22:16:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:18:INFO] Sniff delimiter as ',' [2021-02-17:19:22:18:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:18:INFO] Sniff delimiter as ',' [2021-02-17:19:22:18:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:18:INFO] Sniff delimiter as ',' [2021-02-17:19:22:18:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:18:INFO] Sniff delimiter as ',' [2021-02-17:19:22:18:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:18:INFO] Sniff delimiter as ',' [2021-02-17:19:22:18:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:18:INFO] Sniff delimiter as ',' [2021-02-17:19:22:18:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:18:INFO] Sniff delimiter as ',' [2021-02-17:19:22:18:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:18:INFO] Sniff delimiter as ',' [2021-02-17:19:22:18:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:20:INFO] Sniff delimiter as ',' [2021-02-17:19:22:20:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:20:INFO] Sniff delimiter as ',' [2021-02-17:19:22:20:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:21:INFO] Sniff delimiter as ',' [2021-02-17:19:22:20:INFO] Sniff delimiter as ',' [2021-02-17:19:22:20:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:20:INFO] Sniff delimiter as ',' [2021-02-17:19:22:20:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:21:INFO] Sniff delimiter as ',' [2021-02-17:19:22:21:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:21:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:23:INFO] Sniff delimiter as ',' [2021-02-17:19:22:23:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:23:INFO] Sniff delimiter as ',' [2021-02-17:19:22:23:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:23:INFO] Sniff delimiter as ',' [2021-02-17:19:22:23:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:23:INFO] Sniff delimiter as ',' [2021-02-17:19:22:23:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:23:INFO] Sniff delimiter as ',' [2021-02-17:19:22:23:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:23:INFO] Sniff delimiter as ',' [2021-02-17:19:22:23:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:23:INFO] Sniff delimiter as ',' [2021-02-17:19:22:23:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:23:INFO] Sniff delimiter as ',' [2021-02-17:19:22:23:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:25:INFO] Sniff delimiter as ',' [2021-02-17:19:22:25:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:25:INFO] Sniff delimiter as ',' [2021-02-17:19:22:25:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:25:INFO] Sniff delimiter as ',' [2021-02-17:19:22:25:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:25:INFO] Sniff delimiter as ',' [2021-02-17:19:22:25:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:25:INFO] Sniff delimiter as ',' [2021-02-17:19:22:25:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:25:INFO] Sniff delimiter as ',' [2021-02-17:19:22:25:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:26:INFO] Sniff delimiter as ',' [2021-02-17:19:22:26:INFO] Determined delimiter of 
CSV input is ',' [2021-02-17:19:22:26:INFO] Sniff delimiter as ',' [2021-02-17:19:22:26:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:27:INFO] Sniff delimiter as ',' [2021-02-17:19:22:27:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:27:INFO] Sniff delimiter as ',' [2021-02-17:19:22:27:INFO] Sniff delimiter as ',' [2021-02-17:19:22:27:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:27:INFO] Sniff delimiter as ',' [2021-02-17:19:22:27:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:28:INFO] Sniff delimiter as ',' [2021-02-17:19:22:28:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:27:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:28:INFO] Sniff delimiter as ',' [2021-02-17:19:22:28:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:28:INFO] Sniff delimiter as ',' [2021-02-17:19:22:28:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:28:INFO] Sniff delimiter as ',' [2021-02-17:19:22:28:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:30:INFO] Sniff delimiter as ',' [2021-02-17:19:22:30:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:30:INFO] Sniff delimiter as ',' [2021-02-17:19:22:30:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:30:INFO] Sniff delimiter as ',' [2021-02-17:19:22:30:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:30:INFO] Sniff delimiter as ',' [2021-02-17:19:22:30:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:30:INFO] Sniff delimiter as ',' [2021-02-17:19:22:30:INFO] Sniff delimiter as ',' [2021-02-17:19:22:30:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:31:INFO] Sniff delimiter as ',' [2021-02-17:19:22:31:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:30:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:31:INFO] Sniff delimiter as ',' [2021-02-17:19:22:31:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:32:INFO] Sniff delimiter as ',' [2021-02-17:19:22:32:INFO] Sniff delimiter as ',' [2021-02-17:19:22:32:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:32:INFO] Sniff delimiter as ',' [2021-02-17:19:22:32:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:32:INFO] Sniff delimiter as ',' [2021-02-17:19:22:32:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:33:INFO] Sniff delimiter as ',' [2021-02-17:19:22:33:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:32:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:32:INFO] Sniff delimiter as ',' [2021-02-17:19:22:32:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:32:INFO] Sniff delimiter as ',' [2021-02-17:19:22:32:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:33:INFO] Sniff delimiter as ',' [2021-02-17:19:22:33:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:35:INFO] Sniff delimiter as ',' [2021-02-17:19:22:35:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:35:INFO] Sniff delimiter as ',' [2021-02-17:19:22:35:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:35:INFO] Sniff delimiter as ',' [2021-02-17:19:22:35:INFO] Determined delimiter of CSV input is ',' [2021-02-17:19:22:35:INFO] Sniff delimiter as ',' [2021-02-17:19:22:35:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. 
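If you are curious where exactly those results live, `new_xgb_transformer.output_path` is just an S3 prefix, and you can list what the job wrote there before downloading anything. This is an optional sketch using boto3; nothing later in the notebook depends on it. ###Code
import boto3

# output_path has the form 's3://<bucket>/<prefix>'
bucket, _, key_prefix = new_xgb_transformer.output_path[len('s3://'):].partition('/')

s3 = boto3.client('s3')
for obj in s3.list_objects_v2(Bucket=bucket, Prefix=key_prefix).get('Contents', []):
    print(obj['Key'], obj['Size'])
###Output
_____no_output_____
###Markdown With that confirmed, we pull the output down into the local `data_dir` directory.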
###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/365.8 KiB (3.7 MiB/s) with 1 file(s) remaining Completed 365.8 KiB/365.8 KiB (5.2 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-central-1-941012658317/xgboost-2021-02-17-19-17-14-787/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_X = None test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. 
As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = 'new-xgb-endpoint-config-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, # Point the production variant at the newly trained model (via its transformer), # not the original one. "ModelName": new_xgb_transformer.model_name, "VariantName": "AllTraffic" }] ) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output ---------------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario.
Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. ###Code # Make sure that we use SageMaker 1.x !pip install sagemaker==1.72.0 ###Output _____no_output_____ ###Markdown Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). 
It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output _____no_output_____ ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
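As a toy illustration of that point (completely separate from the real data), fitting a `CountVectorizer` on a couple of made-up training 'reviews' and then transforming an unseen review shows that words outside the training vocabulary are simply ignored. ###Code
from sklearn.feature_extraction.text import CountVectorizer

toy_train = ['great movie great cast', 'boring plot']
toy_test = ['great plot terrible ending']  # 'terrible' and 'ending' never appeared in training

toy_vectorizer = CountVectorizer()
print(toy_vectorizer.fit_transform(toy_train).toarray())  # counts over the training vocabulary only
print(sorted(toy_vectorizer.vocabulary_))                 # ['boring', 'cast', 'great', 'movie', 'plot']
print(toy_vectorizer.transform(toy_test).toarray())       # unseen words contribute nothing
###Output
_____no_output_____
###Markdown With that in mind, the function below builds the real feature representation, caching the result so the heavy lifting only has to be done once.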
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
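# At this point train_X is a 25,000 x 5,000 NumPy array of word counts (25,000 training
# reviews, 5,000 vocabulary words); wrapping the slices in DataFrames below makes it easy
# to write them out as the header-less CSV files that the XGBoost container expects.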
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import os # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None data = None labels = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) test_location val_location train_location ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. 
At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model of comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code are then used the manipulate the training artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is require when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2022-02-07 08:08:27 Starting - Starting the training job........................... 2022-02-07 08:12:39 Starting - Launched instance was unhealthy, replacing it!... 2022-02-07 08:13:03 Starting - Preparing the instances for training...... 2022-02-07 08:14:10 Downloading - Downloading input data... 2022-02-07 08:14:36 Training - Downloading the training image... 2022-02-07 08:15:16 Training - Training image download completed. Training in progress.Arguments: train [2022-02-07:08:15:19:INFO] Running standalone xgboost training. [2022-02-07:08:15:19:INFO] File size need to be processed in the node: 238.47mb. 
Available memory size in the node: 8510.67mb [2022-02-07:08:15:19:INFO] Determined delimiter of CSV input is ',' [08:15:19] S3DistributionType set as FullyReplicated [08:15:22] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2022-02-07:08:15:22:INFO] Determined delimiter of CSV input is ',' [08:15:22] S3DistributionType set as FullyReplicated [08:15:23] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [08:15:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 0 pruned nodes, max_depth=5 [0]#011train-error:0.296667#011validation-error:0.3032 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping. Will train until validation-error hasn't improved in 10 rounds. [08:15:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 4 pruned nodes, max_depth=5 [1]#011train-error:0.281#011validation-error:0.2858 [08:15:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [2]#011train-error:0.277867#011validation-error:0.2831 [08:15:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [3]#011train-error:0.275933#011validation-error:0.2823 [08:15:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [4]#011train-error:0.269333#011validation-error:0.2795 [08:15:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 0 pruned nodes, max_depth=5 [5]#011train-error:0.257667#011validation-error:0.2668 [08:15:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [6]#011train-error:0.2462#011validation-error:0.2608 [08:15:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 2 pruned nodes, max_depth=5 [7]#011train-error:0.2394#011validation-error:0.2539 [08:15:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 2 pruned nodes, max_depth=5 [8]#011train-error:0.2304#011validation-error:0.2469 [08:15:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [9]#011train-error:0.2244#011validation-error:0.24 [08:15:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [10]#011train-error:0.220933#011validation-error:0.2371 [08:15:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [11]#011train-error:0.214667#011validation-error:0.2323 [08:15:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [12]#011train-error:0.2086#011validation-error:0.2247 [08:15:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [13]#011train-error:0.206267#011validation-error:0.2256 [08:15:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [14]#011train-error:0.204933#011validation-error:0.2242 [08:15:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [15]#011train-error:0.201333#011validation-error:0.2205 [08:16:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 8 pruned nodes, max_depth=5 [16]#011train-error:0.1964#011validation-error:0.2165 [08:16:01] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [17]#011train-error:0.1954#011validation-error:0.2139 [08:16:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [18]#011train-error:0.1922#011validation-error:0.2119 [08:16:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [19]#011train-error:0.189667#011validation-error:0.2082 [08:16:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [20]#011train-error:0.185267#011validation-error:0.2058 [08:16:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [21]#011train-error:0.183#011validation-error:0.2022 [08:16:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [22]#011train-error:0.1816#011validation-error:0.2008 [08:16:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [23]#011train-error:0.1772#011validation-error:0.1998 [08:16:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [24]#011train-error:0.1748#011validation-error:0.1996 [08:16:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [25]#011train-error:0.1726#011validation-error:0.1981 [08:16:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5 [26]#011train-error:0.170533#011validation-error:0.1965 [08:16:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5 [27]#011train-error:0.1682#011validation-error:0.1947 [08:16:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [28]#011train-error:0.166533#011validation-error:0.1934 [08:16:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [29]#011train-error:0.164333#011validation-error:0.1914 [08:16:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [30]#011train-error:0.162133#011validation-error:0.1902 [08:16:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [31]#011train-error:0.161667#011validation-error:0.1888 [08:16:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [32]#011train-error:0.159533#011validation-error:0.1877 [08:16:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 14 pruned nodes, max_depth=5 [33]#011train-error:0.1582#011validation-error:0.187 [08:16:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 16 pruned nodes, max_depth=5 [34]#011train-error:0.1566#011validation-error:0.187 [08:16:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5 [35]#011train-error:0.1544#011validation-error:0.1856 [08:16:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [36]#011train-error:0.153467#011validation-error:0.1844 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMakers Batch Transform functionality. 
Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can peform inference on a large number of samples. An example of this in industry might be peforming an end of month report. This method of inference can also be useful to us as it means to can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer objects from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output .....................................Arguments: serve [2022-02-07 08:28:59 +0000] [1] [INFO] Starting gunicorn 19.9.0 [2022-02-07 08:28:59 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2022-02-07 08:28:59 +0000] [1] [INFO] Using worker: gevent [2022-02-07 08:28:59 +0000] [22] [INFO] Booting worker with pid: 22 [2022-02-07 08:28:59 +0000] [23] [INFO] Booting worker with pid: 23 [2022-02-07 08:28:59 +0000] [24] [INFO] Booting worker with pid: 24 [2022-02-07 08:28:59 +0000] [25] [INFO] Booting worker with pid: 25 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. 
monkey.patch_all(subprocess=True) [2022-02-07:08:28:59:INFO] Model loaded successfully for worker : 22 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2022-02-07:08:28:59:INFO] Model loaded successfully for worker : 24 [2022-02-07:08:28:59:INFO] Model loaded successfully for worker : 23 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2022-02-07:08:29:00:INFO] Model loaded successfully for worker : 25 [2022-02-07:08:29:06:INFO] Sniff delimiter as ',' [2022-02-07:08:29:06:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:06:INFO] Sniff delimiter as ',' [2022-02-07:08:29:06:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:06:INFO] Sniff delimiter as ',' [2022-02-07:08:29:06:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:06:INFO] Sniff delimiter as ',' [2022-02-07:08:29:06:INFO] Determined delimiter of CSV input is ',' 2022-02-07T08:29:03.884:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2022-02-07:08:29:07:INFO] Sniff delimiter as ',' [2022-02-07:08:29:07:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:07:INFO] Sniff delimiter as ',' [2022-02-07:08:29:07:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:07:INFO] Sniff delimiter as ',' [2022-02-07:08:29:07:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:07:INFO] Sniff delimiter as ',' [2022-02-07:08:29:07:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:10:INFO] Sniff delimiter as ',' [2022-02-07:08:29:10:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:10:INFO] Sniff delimiter as ',' [2022-02-07:08:29:10:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:10:INFO] Sniff delimiter as ',' [2022-02-07:08:29:10:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:10:INFO] Sniff delimiter as ',' [2022-02-07:08:29:10:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:10:INFO] Sniff delimiter as ',' [2022-02-07:08:29:10:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:11:INFO] Sniff delimiter as ',' [2022-02-07:08:29:10:INFO] Sniff delimiter as ',' [2022-02-07:08:29:10:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:11:INFO] Sniff delimiter as ',' [2022-02-07:08:29:11:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:11:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:14:INFO] Sniff 
delimiter as ',' [2022-02-07:08:29:14:INFO] Sniff delimiter as ',' [2022-02-07:08:29:14:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:14:INFO] Sniff delimiter as ',' [2022-02-07:08:29:14:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:14:INFO] Sniff delimiter as ',' [2022-02-07:08:29:14:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:14:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:14:INFO] Sniff delimiter as ',' [2022-02-07:08:29:14:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:14:INFO] Sniff delimiter as ',' [2022-02-07:08:29:14:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:15:INFO] Sniff delimiter as ',' [2022-02-07:08:29:15:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:15:INFO] Sniff delimiter as ',' [2022-02-07:08:29:15:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:18:INFO] Sniff delimiter as ',' [2022-02-07:08:29:18:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:18:INFO] Sniff delimiter as ',' [2022-02-07:08:29:18:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:18:INFO] Sniff delimiter as ',' [2022-02-07:08:29:18:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:18:INFO] Sniff delimiter as ',' [2022-02-07:08:29:18:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:18:INFO] Sniff delimiter as ',' [2022-02-07:08:29:18:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:18:INFO] Sniff delimiter as ',' [2022-02-07:08:29:18:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:22:INFO] Sniff delimiter as ',' [2022-02-07:08:29:22:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:22:INFO] Sniff delimiter as ',' [2022-02-07:08:29:22:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:22:INFO] Sniff delimiter as ',' [2022-02-07:08:29:22:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:22:INFO] Sniff delimiter as ',' [2022-02-07:08:29:22:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:22:INFO] Sniff delimiter as ',' [2022-02-07:08:29:22:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:22:INFO] Sniff delimiter as ',' [2022-02-07:08:29:22:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:22:INFO] Sniff delimiter as ',' [2022-02-07:08:29:22:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:29:22:INFO] Sniff delimiter as ',' [2022-02-07:08:29:22:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/473.3 KiB (2.3 MiB/s) with 1 file(s) remaining Completed 473.3 KiB/473.3 KiB (4.1 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-1-801008216402/xgboost-2022-02-07-08-22-55-311/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. 
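The raw batch transform output is a single number per review: the probability produced by the `binary:logistic` objective. Converting it to a label is just a rounding step, illustrated here with a minimal, self-contained sketch (the numbers below are made up for illustration, not the real model output): ###Code
# Hedged illustration only -- hypothetical probabilities, not read from S3.
from sklearn.metrics import accuracy_score

raw_scores = [0.92, 0.13, 0.51, 0.49]            # made-up outputs from binary:logistic
labels = [round(score) for score in raw_scores]  # nearest-label rounding, as in the next cell
true_labels = [1, 0, 0, 0]                       # made-up ground truth

print(labels)                                    # [1, 0, 1, 0]
print(accuracy_score(true_labels, labels))       # 0.75
###Output
_____no_output_____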
###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() len(new_X) new_X[4] len(new_Y) ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code vocabulary # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary,preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.fit_transform(new_X).toarray() vectorizer ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. 
###Code prefix # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_data_location ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ................................Arguments: serve [2022-02-07 08:36:37 +0000] [1] [INFO] Starting gunicorn 19.9.0 [2022-02-07 08:36:37 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2022-02-07 08:36:37 +0000] [1] [INFO] Using worker: gevent [2022-02-07 08:36:37 +0000] [21] [INFO] Booting worker with pid: 21 [2022-02-07 08:36:37 +0000] [22] [INFO] Booting worker with pid: 22 [2022-02-07 08:36:37 +0000] [23] [INFO] Booting worker with pid: 23 [2022-02-07 08:36:37 +0000] [24] [INFO] Booting worker with pid: 24 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) Arguments: serve [2022-02-07 08:36:37 +0000] [1] [INFO] Starting gunicorn 19.9.0 [2022-02-07 08:36:37 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2022-02-07 08:36:37 +0000] [1] [INFO] Using worker: gevent [2022-02-07 08:36:37 +0000] [21] [INFO] Booting worker with pid: 21 [2022-02-07 08:36:37 +0000] [22] [INFO] Booting worker with pid: 22 [2022-02-07 08:36:37 +0000] [23] [INFO] Booting worker with pid: 23 [2022-02-07 08:36:37 +0000] [24] [INFO] Booting worker with pid: 24 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2022-02-07:08:36:37:INFO] Model loaded successfully for worker : 21 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. 
Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2022-02-07:08:36:37:INFO] Model loaded successfully for worker : 22 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2022-02-07:08:36:37:INFO] Model loaded successfully for worker : 23 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2022-02-07:08:36:37:INFO] Model loaded successfully for worker : 24 [2022-02-07:08:36:37:INFO] Model loaded successfully for worker : 21 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2022-02-07:08:36:37:INFO] Model loaded successfully for worker : 22 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2022-02-07:08:36:37:INFO] Model loaded successfully for worker : 23 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. 
Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2022-02-07:08:36:37:INFO] Model loaded successfully for worker : 24 2022-02-07T08:36:41.241:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2022-02-07:08:36:44:INFO] Sniff delimiter as ',' [2022-02-07:08:36:44:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:36:44:INFO] Sniff delimiter as ',' [2022-02-07:08:36:44:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:36:44:INFO] Sniff delimiter as ',' [2022-02-07:08:36:44:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:36:44:INFO] Sniff delimiter as ',' [2022-02-07:08:36:44:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:36:44:INFO] Sniff delimiter as ',' [2022-02-07:08:36:44:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:36:44:INFO] Sniff delimiter as ',' [2022-02-07:08:36:44:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:36:44:INFO] Sniff delimiter as ',' [2022-02-07:08:36:44:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:36:44:INFO] Sniff delimiter as ',' [2022-02-07:08:36:44:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:36:47:INFO] Sniff delimiter as ',' [2022-02-07:08:36:47:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:36:47:INFO] Sniff delimiter as ',' [2022-02-07:08:36:47:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:36:48:INFO] Sniff delimiter as ',' [2022-02-07:08:36:48:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:36:48:INFO] Sniff delimiter as ',' [2022-02-07:08:36:48:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:36:48:INFO] Sniff delimiter as ',' [2022-02-07:08:36:48:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:36:48:INFO] Sniff delimiter as ',' [2022-02-07:08:36:48:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:36:48:INFO] Sniff delimiter as ',' [2022-02-07:08:36:48:INFO] Determined delimiter of CSV input is ',' [2022-02-07:08:36:48:INFO] Sniff delimiter as ',' [2022-02-07:08:36:48:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-us-east-1-801008216402/xgboost-2022-02-07-08-31-29-110/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. 
Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. Using already existing model: xgboost-2022-02-07-08-08-27-527 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['got', 'video', 'use', 'watch', 'last', 'night', 'act', 'start', 'extrem', 'bad', 'hey', 'hey', 'twister', 'got', 'good', 'soon', 'ward', 'tornado', 'look', 'extrem', 'fake', 'mani', 'cgi', 'effect', 'dodgi', 'scene', 'hous', 'crack', 'apart', 'content', 'insid', 'blown', 'around', 'suck', 'extrem', 'well', 'done', 'par', 'movi', 'like', 'twister', 'scene', 'devast', 'also', 'extrem', 'well', 'done', 'stori', 'well', 'written', 'refresh', 'see', 'movi', 'like', 'stray', 'away', 'old', 'disast', 'formula', 'movi', 'genr', 'seem', 'stuck', '30', 'year', 'movi', 'weird', 'mix', 'fx', 'act', 'qualiti', 'merit', 'book', 'banana'], 0) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. 
The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe some new slang has been introduced, or some other artifact of popular culture has changed the way that people write movie reviews.

To do this, we start by fitting a `CountVectorizer` to the new data. ###Code
new_vectorizer = CountVectorizer(max_features=5000,
                                 preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:489: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None'
warnings.warn("The parameter 'token_pattern' will not be used"
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code
print(original_vocabulary - new_vocabulary)
###Output
{'weari', 'victorian', 'reincarn', 'playboy', 'ghetto', '21st', 'spill'}
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code
print(new_vocabulary - original_vocabulary)
###Output
{'orchestr', 'optimist', 'omin', 'dubiou', 'banana', 'sophi', 'masterson'}
###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.

**Question:** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?

**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. ###Code
# NOTE: new_XV was encoded with the *original* vocabulary above, so we re-encode the new
# reviews with new_vectorizer before labelling the columns with its feature names.
pd.DataFrame(data=new_vectorizer.transform(new_X).toarray(),
             columns=new_vectorizer.get_feature_names()).sum(axis=0)[list(new_vocabulary - original_vocabulary)]
###Output
_____no_output_____
###Markdown
(TODO) Build a new model

Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.

To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.

**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.

In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
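To make the vocabulary warning above concrete, here is a small self-contained sketch (toy reviews, not the IMDb data or the notebook's own variables) showing how a review encoded against a stale vocabulary silently loses the new words: ###Code
# Hedged toy example: two vectorizers fit on different "eras" of reviews.
from sklearn.feature_extraction.text import CountVectorizer

identity = lambda x: x  # the reviews are already tokenized, as in the real pipeline

old_reviews = [['great', 'movie'], ['bad', 'plot']]
new_reviews = [['banana', 'movie'], ['dubiou', 'plot']]

old_vec = CountVectorizer(preprocessor=identity, tokenizer=identity).fit(old_reviews)
new_vec = CountVectorizer(preprocessor=identity, tokenizer=identity).fit(new_reviews)

sample = ['banana', 'movie']
print(old_vec.transform([sample]).toarray())  # 'banana' is silently dropped by the old vocabulary
print(new_vec.transform([sample]).toarray())  # both words are counted under the new vocabulary
###Output
_____no_output_____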
###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. 
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) new_xgb.output_path ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2022-02-07 08:48:44 Starting - Starting the training job... 2022-02-07 08:48:45 Starting - Launching requested ML instances...... 2022-02-07 08:50:06 Starting - Preparing the instances for training... ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() new_xgb_transformer.output_path ###Output _____no_output_____ ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-us-east-1-801008216402/xgboost-2022-02-07-09-13-13-675/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. 
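Keeping the leakage caveat from above in mind, the accuracy computed in the next cell will be optimistic, since the new model was trained on this very data. One way to get an honest number, shown here only as a hedged sketch (it is not run, and it would require re-doing the split and the training), is to hold out a slice of the new data that the updated model never sees: ###Code
# Sketch only: assumes 'new_XV' and 'new_Y' as they existed *before* being overwritten to save memory.
from sklearn.model_selection import train_test_split

# Keep 20% of the new data completely out of training and validation.
X_rest, X_holdout, y_rest, y_holdout = train_test_split(new_XV, new_Y, test_size=0.2, random_state=0)

# Train the updated model on (a split of) X_rest / y_rest only, then run a batch transform
# on X_holdout and score it, e.g. accuracy_score(y_holdout, holdout_predictions).
###Output
_____no_output_____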
###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() new_xgb_transformer.output_path xgb_predictor.endpoint !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir data_dir = '../data/sentiment_update' predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. 
The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.

**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code
from time import gmtime, strftime

# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
new_xgb_endpoint_config_name = 'new-xgb-endpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())

# TODO: Using the SageMaker Client, construct the endpoint configuration.
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
                            EndpointConfigName = new_xgb_endpoint_config_name,
                            ProductionVariants = [{
                                # The single production variant points at our new XGBoost model
                                "InstanceType": "ml.m4.xlarge",
                                "InitialVariantWeight": 1,
                                "InitialInstanceCount": 1,
                                "ModelName": new_xgb_transformer.model_name,
                                "VariantName": "XGB-Model"
                            }])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.

Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.

**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# update_endpoint
new_endpoint_info = session.sagemaker_client.update_endpoint(
                            EndpointName = xgb_predictor.endpoint,
                            EndpointConfigName = new_xgb_endpoint_config_name)
xgb_predictor.endpoint
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code
new_endpoint_dec = session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
---------!
###Markdown
Step 7: Delete the Endpoint

Of course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Some Additional Questions

This notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.

For example,
- What other ways could the underlying distribution change?
- Is it a good idea to re-train the model using only the new data? (One possible alternative is sketched below.)
- What would change if the quantity of new data wasn't large? Say you only received 500 samples?
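As a hedged, illustrative sketch for the second question above (not part of the original notebook, and using hypothetical variable names since the notebook overwrote its intermediate data to save memory): instead of re-training only on the new reviews, we could encode both the original and the new training reviews with the new vocabulary and fit the updated model on the combined set, so that it also retains what it learned about the older style of reviews. ###Code
# Hypothetical names: train_words / new_words are the tokenized original and new training
# reviews, and train_labels / new_labels are their labels.
import pandas as pd

combined_words = train_words + new_words
combined_labels = pd.concat([pd.DataFrame(train_labels), pd.DataFrame(new_labels)], ignore_index=True)

# Encode everything with the *new* vocabulary and hand the result to a fresh estimator,
# exactly as was done above for the model trained on the new data alone.
combined_XV = new_vectorizer.transform(combined_words).toarray()
###Output
_____no_output_____
###Markdown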
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm -r $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm -r $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. 
###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2019-07-28 22:49:44-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 24.4MB/s in 4.7s 2019-07-28 22:49:49 (17.1 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Wrote preprocessed data to cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.

- Model Artifacts
- Training Code (Container)
- Inference Code (Container)

The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.

The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.

The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code
from sagemaker import get_execution_role

# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()

# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')

# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
                                    role,      # What is our current IAM Role
                                    train_instance_count=1, # How many compute instances
                                    train_instance_type='ml.m4.xlarge', # What kind of compute instances
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                    sagemaker_session=session)

# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
                        eta=0.2,
                        gamma=4,
                        min_child_weight=6,
                        subsample=0.8,
                        silent=0,
                        objective='binary:logistic',
                        early_stopping_rounds=10,
                        num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost model

Now that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')

xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
2019-07-29 10:19:24 Starting - Starting the training job...
2019-07-29 10:19:26 Starting - Launching requested ML instances......
2019-07-29 10:20:26 Starting - Preparing the instances for training...
2019-07-29 10:21:20 Downloading - Downloading input data...
2019-07-29 10:21:40 Training - Downloading the training image..
Arguments: train
[2019-07-29:10:21:59:INFO] Running standalone xgboost training.
[2019-07-29:10:21:59:INFO] File size need to be processed in the node: 9.14mb.
Available memory size in the node: 8446.86mb [2019-07-29:10:21:59:INFO] Determined delimiter of CSV input is ',' [10:21:59] S3DistributionType set as FullyReplicated [10:21:59] 15000x178 matrix with 2670000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2019-07-29:10:21:59:INFO] Determined delimiter of CSV input is ',' [10:21:59] S3DistributionType set as FullyReplicated [10:21:59] 10000x178 matrix with 1780000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [10:21:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 58 extra nodes, 4 pruned nodes, max_depth=5 [0]#011train-error:0.396667#011validation-error:0.4226 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [10:21:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 52 extra nodes, 2 pruned nodes, max_depth=5 [1]#011train-error:0.383333#011validation-error:0.4013 [10:21:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 52 extra nodes, 6 pruned nodes, max_depth=5 [2]#011train-error:0.374#011validation-error:0.3943 [10:21:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 54 extra nodes, 2 pruned nodes, max_depth=5 [3]#011train-error:0.367133#011validation-error:0.3896 [10:21:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 4 pruned nodes, max_depth=5 [4]#011train-error:0.361867#011validation-error:0.3851 [10:21:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 2 pruned nodes, max_depth=5 [5]#011train-error:0.357667#011validation-error:0.3839 [10:21:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 58 extra nodes, 0 pruned nodes, max_depth=5 [6]#011train-error:0.348#011validation-error:0.3788 [10:22:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 54 extra nodes, 6 pruned nodes, max_depth=5 [7]#011train-error:0.3418#011validation-error:0.3766 [10:22:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 56 extra nodes, 4 pruned nodes, max_depth=5 [8]#011train-error:0.3368#011validation-error:0.3774 [10:22:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 54 extra nodes, 2 pruned nodes, max_depth=5 [9]#011train-error:0.3332#011validation-error:0.3756 [10:22:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 2 pruned nodes, max_depth=5 [10]#011train-error:0.328133#011validation-error:0.3725 [10:22:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 54 extra nodes, 2 pruned nodes, max_depth=5 [11]#011train-error:0.3232#011validation-error:0.3712 [10:22:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 4 pruned nodes, max_depth=5 [12]#011train-error:0.3204#011validation-error:0.3703 [10:22:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [13]#011train-error:0.318933#011validation-error:0.3703 [10:22:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 2 pruned nodes, max_depth=5 [14]#011train-error:0.313733#011validation-error:0.3657 [10:22:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [15]#011train-error:0.3124#011validation-error:0.3641 [10:22:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 4 pruned nodes, max_depth=5 [16]#011train-error:0.308867#011validation-error:0.3628 [10:22:00] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [17]#011train-error:0.304533#011validation-error:0.359 [10:22:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 56 extra nodes, 2 pruned nodes, max_depth=5 [18]#011train-error:0.301533#011validation-error:0.3587 [10:22:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 58 extra nodes, 0 pruned nodes, max_depth=5 [19]#011train-error:0.2952#011validation-error:0.356 [10:22:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 58 extra nodes, 2 pruned nodes, max_depth=5 [20]#011train-error:0.292#011validation-error:0.357 [10:22:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [21]#011train-error:0.291133#011validation-error:0.3558 [10:22:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 4 pruned nodes, max_depth=5 [22]#011train-error:0.288333#011validation-error:0.3546 [10:22:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [23]#011train-error:0.285667#011validation-error:0.3534 [10:22:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 52 extra nodes, 4 pruned nodes, max_depth=5 [24]#011train-error:0.281133#011validation-error:0.3521 [10:22:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [25]#011train-error:0.280133#011validation-error:0.3534 [10:22:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [26]#011train-error:0.278933#011validation-error:0.3536 [10:22:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 56 extra nodes, 2 pruned nodes, max_depth=5 [27]#011train-error:0.2786#011validation-error:0.3514 [10:22:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [28]#011train-error:0.2774#011validation-error:0.3516 [10:22:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [29]#011train-error:0.277067#011validation-error:0.3508 [10:22:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [30]#011train-error:0.273533#011validation-error:0.35 [10:22:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [31]#011train-error:0.2734#011validation-error:0.35 [10:22:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 10 pruned nodes, max_depth=5 [32]#011train-error:0.2722#011validation-error:0.3506 [10:22:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 8 pruned nodes, max_depth=5 [33]#011train-error:0.271#011validation-error:0.3518 [10:22:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [34]#011train-error:0.2684#011validation-error:0.3516 [10:22:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [35]#011train-error:0.2654#011validation-error:0.3491 [10:22:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [36]#011train-error:0.2628#011validation-error:0.348 [10:22:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [37]#011train-error:0.262333#011validation-error:0.3489 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 
roots, 50 extra nodes, 6 pruned nodes, max_depth=5 [38]#011train-error:0.259533#011validation-error:0.3475 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 2 pruned nodes, max_depth=5 [39]#011train-error:0.2554#011validation-error:0.3486 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5 [40]#011train-error:0.255467#011validation-error:0.3476 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [41]#011train-error:0.254667#011validation-error:0.3483 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [42]#011train-error:0.253867#011validation-error:0.3486 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 16 pruned nodes, max_depth=5 [43]#011train-error:0.253533#011validation-error:0.3489 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 0 pruned nodes, max_depth=5 [44]#011train-error:0.2534#011validation-error:0.3482 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [45]#011train-error:0.253733#011validation-error:0.3482 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [46]#011train-error:0.251#011validation-error:0.3476 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [47]#011train-error:0.250333#011validation-error:0.346 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [48]#011train-error:0.2482#011validation-error:0.3467 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 4 pruned nodes, max_depth=5 [49]#011train-error:0.2468#011validation-error:0.3468 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 58 extra nodes, 4 pruned nodes, max_depth=5 [50]#011train-error:0.244933#011validation-error:0.3464 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [51]#011train-error:0.243067#011validation-error:0.3463 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [52]#011train-error:0.240867#011validation-error:0.3453 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 0 pruned nodes, max_depth=5 [53]#011train-error:0.239333#011validation-error:0.3448 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [54]#011train-error:0.24#011validation-error:0.3461 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [55]#011train-error:0.239#011validation-error:0.3463 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 2 pruned nodes, max_depth=5 [56]#011train-error:0.237133#011validation-error:0.3445 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [57]#011train-error:0.2364#011validation-error:0.3428 [10:22:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [58]#011train-error:0.2346#011validation-error:0.3436 [10:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, 
max_depth=5 [59]#011train-error:0.234267#011validation-error:0.344 [10:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 [60]#011train-error:0.234#011validation-error:0.3443 [10:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 8 pruned nodes, max_depth=5 [61]#011train-error:0.234533#011validation-error:0.3438 [10:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 2 pruned nodes, max_depth=5 [62]#011train-error:0.231933#011validation-error:0.3427 [10:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 6 pruned nodes, max_depth=5 [63]#011train-error:0.231133#011validation-error:0.3439 [10:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 2 pruned nodes, max_depth=5 [64]#011train-error:0.227933#011validation-error:0.3439 [10:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [65]#011train-error:0.2272#011validation-error:0.3435 [10:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 10 pruned nodes, max_depth=5 [66]#011train-error:0.227267#011validation-error:0.3445 [10:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [67]#011train-error:0.225333#011validation-error:0.3439 [10:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [68]#011train-error:0.2236#011validation-error:0.3414 [10:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [69]#011train-error:0.223267#011validation-error:0.3415 [10:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [70]#011train-error:0.222#011validation-error:0.342 [10:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5 [71]#011train-error:0.221#011validation-error:0.3411 [10:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 2 pruned nodes, max_depth=5 [72]#011train-error:0.217333#011validation-error:0.3415 [10:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [73]#011train-error:0.216667#011validation-error:0.3408 [10:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [74]#011train-error:0.216667#011validation-error:0.3393 [10:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=4 [75]#011train-error:0.217067#011validation-error:0.3388 [10:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [76]#011train-error:0.214533#011validation-error:0.3404 [10:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [77]#011train-error:0.214067#011validation-error:0.341 [10:22:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [78]#011train-error:0.211667#011validation-error:0.3411 [10:22:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [79]#011train-error:0.209933#011validation-error:0.3402 [10:22:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 10 pruned nodes, max_depth=5 
[80]#011train-error:0.209267#011validation-error:0.3396 [10:22:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5 [81]#011train-error:0.21#011validation-error:0.3411 [10:22:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [82]#011train-error:0.208533#011validation-error:0.3405 [10:22:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 18 pruned nodes, max_depth=5 [83]#011train-error:0.208267#011validation-error:0.3412 [10:22:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 6 pruned nodes, max_depth=5 [84]#011train-error:0.206067#011validation-error:0.3422 [10:22:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 0 pruned nodes, max_depth=5 [85]#011train-error:0.2052#011validation-error:0.3399 Stopping. Best iteration: [75]#011train-error:0.217067#011validation-error:0.3388
###Markdown
Testing the model
Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
..........................................!
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
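The cell below does the copy with the AWS CLI. If you would rather stay in Python, the same copy can be done with `boto3`; the following is only a rough sketch, and it assumes that `xgb_transformer.output_path` is a plain `s3://bucket/prefix` URI that the notebook's role is allowed to read.
###Code
# Sketch only (not part of the original notebook): a boto3 alternative to the
# `!aws s3 cp --recursive` cell below.
import os
import boto3

s3 = boto3.client('s3')

# Split an 's3://bucket/prefix' URI into its bucket and key prefix (assumed layout).
bucket, _, key_prefix = xgb_transformer.output_path[len('s3://'):].partition('/')

# Download every object under the transform job's output prefix into data_dir.
for obj in s3.list_objects_v2(Bucket=bucket, Prefix=key_prefix).get('Contents', []):
    local_path = os.path.join(data_dir, os.path.basename(obj['Key']))
    s3.download_file(bucket, obj['Key'], local_path)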
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
Completed 256.0 KiB/364.0 KiB (2.8 MiB/s) with 1 file(s) remaining Completed 364.0 KiB/364.0 KiB (3.8 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-129722534204/xgboost-2019-07-29-10-22-36-992/test.csv.out to ../data/sentiment_update/test.csv.out
###Markdown
The last step is now to read in the output from our model, convert it to something a little more usable (in this case we want the sentiment to be either `1` (positive) or `0` (negative)), and then compare it to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]

from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Step 5: Looking at New Data
So now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.
However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing.
###Code
import new_data

new_X, new_Y = new_data.get_new_data()
###Output
_____no_output_____
###Markdown
**NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`.
(TODO) Testing the current model
Now that we've loaded the new data, let's check to see how our current XGBoost model performs on it.
First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.
To do this, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.
**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data.
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x)

# TODO: Transform our new data set and store the transformed data in the variable new_XV
new_XV = vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary, which in our case is `5000`.
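If you want a slightly stronger check than inspecting a single review, you can also assert the shape of the whole encoded array. This is just an optional sketch; it only uses the `new_X`, `new_XV` and `vocabulary` objects created above.
###Code
# Optional, stricter sanity check (not part of the original notebook):
# one row per review in the new data set, and one column per word in the
# original 5000-word vocabulary.
assert new_XV.shape == (len(new_X), len(vocabulary)), new_XV.shape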
###Code
len(new_XV[100])
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model we can save the transformed data locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.
First, we save the data locally.
**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance.
###Code
# TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.
**TODO:** Upload the csv file created above to S3.
###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.
**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
........................................!
###Markdown
As usual, we copy the results of the batch transform job to our local instance.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
download: s3://sagemaker-us-east-2-129722534204/xgboost-2019-07-29-10-26-16-974/new_data.csv.out to ../data/sentiment_update/new_data.csv.out
###Markdown
Read in the results of the batch transform job.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
###Output
_____no_output_____
###Markdown
And check the accuracy of our current model.
###Code
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user-provided review.
In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.
Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.
To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews.
This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.
**TODO:** Deploy the XGBoost model.
###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
Using already existing model: xgboost-2019-07-29-10-19-24-563
###Markdown
Diagnose the problem
Now that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.
###Code
from sagemaker.predictor import csv_serializer

# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.
**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.
###Code
def get_sample(in_X, in_XV, in_Y):
    # Iterate over the raw reviews and their bag of words encodings, ask the deployed
    # endpoint for a prediction, and yield only the reviews that are misclassified.
    for idx, smp in enumerate(in_X):
        res = round(float(xgb_predictor.predict(in_XV[idx])))
        if res != in_Y[idx]:
            yield smp, in_Y[idx]

gn = get_sample(new_X, new_XV, new_Y)
###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call `next()` on our generator.
###Code
print(next(gn))
###Output
(['film', 'superb', 'job', 'depict', 'plight', 'al', 'lou', 'gehrig', 'diseas', 'suffer', 'subject', 'done', 'compass', 'well', 'humor', 'helena', 'bonham', 'carter', 'convinc', 'person', 'al', 'found', 'hard', 'believ', 'act', 'kenneth', 'branagh', 'superb', 'actor', 'live', 'expect', 'quirki', 'artist', 'misbehav', 'forc', 'provid', 'companionship', 'helena', 'charact', 'part', 'commun', 'servic', 'altern', 'prison', 'time', 'watch', 'develop', 'relationship', 'two', 'treat', 'begin', 'end', 'tha', 'fact', 'fairi', 'tale', 'detract', 'fabul', 'perform', 'one', 'come', 'care', 'deepli', 'two', 'banana'], 0)
###Markdown
After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.
To do this, we start by fitting a `CountVectorizer` to the new data.
###Code
new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
_____no_output_____
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
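A quick way to summarise how much the vocabulary has shifted, before we look at individual words, is to count the overlap between the two vocabularies. This is only an optional sketch; it reuses the `vocabulary` dict built from the original training data and the `new_vectorizer` fitted above.
###Code
# Optional sketch (not part of the original notebook): how much do the two
# 5000-word vocabularies overlap?
overlap = set(vocabulary.keys()) & set(new_vectorizer.vocabulary_.keys())
print('words in both vocabularies:            ', len(overlap))
print('words only in the original vocabulary: ', len(vocabulary) - len(overlap))
print('words only in the new vocabulary:      ', len(new_vectorizer.vocabulary_) - len(overlap))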
###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'\x8d', 'Õ', 'ò', '\x85', '@', 'o', 'À', '¢', 'P', '\x91', '–', '~', '*', '-', 'Á', 'ð', '=', 'm', 'G', 's', 'é', '”', 'æ', 'Ê', 'ë', 'ō', 'ú', 'E', '\xad', 'Ä', '_', '\t', 'ç', '\uf0b7', 'W', '$', 'S', '¨', '<', '¿', 'ü', 'i', ' ', '³', '\x10', ',', '¤', 'Y', 'B', 'R', '>', '“', 'ê', 'ô', ')', '´', "'", '?', '¡', 'í', '\x95', 'ñ', '\x08', 'N', 'D', '¾', '·', '\x9e', '!', 'ß', 'ù', '%', 'ï', 'à', '’', 'T', '®', 'Ü', 'ö', 'ý', '(', '&', '"', 'Z', '^', '\x8e', 'å', 'î', '/', '\x96', 't', '½', '+', 'á', '\xa0', '}', ';', '\x9a', 'y', ']', 'U', 'ó', ':', '\x97', '\x84', 'ì', '£', '{', '|', 'è', 'Å', '¦', '.', 'V', '#', 'ä', '…', '§', 'É', '\\', 'J', '\x80', 'I', '»', 'ã', 'Ã', '[', 'X', 'È', 'A', '‘', '°', 'H', 'K', 'F', 'a', 'd', 'Q', 'û', 'O', '«', 'â', '₤', 'Ø', '`', 'º', 'M', 'ø', 'L', 'C'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'demon', 'thank', 'deem', 'genet', 'boyl', 'trite', 'second', 'winner', 'lee', 'gari', 'vocal', 'express', 'hardli', 'relax', 'sugar', '1969', 'goofi', 'kevin', 'provoc', 'el', 'heroin', 'dane', 'companion', 'nina', 'timon', 'matthau', 'local', 'shall', 'fond', 'bibl', 'space', 'file', 'lame', 'overcom', 'filmmak', 'uninterest', 'disappear', 'princ', 'caus', 'pin', 'revolut', 'forti', 'thru', 'dialogu', 'comfort', 'fun', 'overli', 'dame', 'flash', 'dirti', 'path', 'leader', 'shed', 'detract', 'heavi', 'tax', 'first', 'stranger', 'entri', 'websit', 'armstrong', 'formula', 'mason', 'deaf', 'suitabl', '1978', 'vs', 'shown', 'linear', 'garner', 'draw', 'mirror', 'mall', 'distant', 'superior', 'left', 'secondari', 'wield', 'crown', 'hooker', 'maid', 'sexual', 'honesti', 'hypnot', 'ambiti', 'imag', 'joey', 'made', 'thrown', 'betray', 'devast', 'hire', 'fabul', 'dalton', '2005', 'crucial', 'knew', 'role', 'franc', 'debut', 'fascist', 'eleven', 'craven', 'guitar', 'misfortun', 'darn', 'canada', 'glori', 'babi', 'interact', 'storytel', 'amazingli', 'eddi', 'bump', 'good', 'across', 'delv', '1993', 'antonioni', 'iron', 'ruin', 'messag', 'adapt', 'baddi', 'diamond', 'amaz', 'prop', 'blame', 'induc', 'bacon', 'linger', 'broadway', 'lean', 'activ', 'substitut', 'distort', 'section', 'lawyer', 'physic', 'shock', 'goof', 'storm', 'cyborg', 'prime', 'buff', 'harrison', 'glamor', 'asylum', 'insur', 'notic', 'sweat', 'fli', 'skip', 'librari', 'limit', 'impos', 'wash', 'atroc', 'access', 'pound', 'simplic', 'make', 'piti', 'breakfast', 'barn', 'choppi', 'regret', 'southern', 'latest', 'regist', 'routin', 'embark', 'motion', 'ador', 'compromis', 'wardrob', 'succeed', 'obvious', 'pretti', 'loi', 'nearli', 'immigr', 'basic', 'mundan', 'skeptic', 'devot', 'friendli', 'found', 'heroic', 'marin', '1972', 'satisfi', 'appli', 'tear', 'femm', 'shape', 'popul', 'anti', 'elm', 'elimin', 'entitl', 'pseudo', 'standout', 'slowli', 'inmat', 'hollywood', 'tradit', 'hug', 'strang', 'eastwood', 'rent', 'sister', 'oddli', 'car', 'player', 'pauli', 'shi', 'farrel', 'moral', 'overlook', 'abil', 'pari', 'lion', 'africa', 'tail', 'buri', 'year', 'downright', 'quickli', 'carrey', 'wannab', 'villain', 'puppet', 'crocodil', 'knowledg', 'simplist', 'expand', 
'28', 'colleagu', 'widmark', 'homeless', 'stewart', 'turn', 'dick', 'choke', 'uncov', 'bravo', 'factor', 'resort', 'lie', 'gratuit', 'sparkl', 'unexpect', 'bull', 'ape', 'henc', 'grin', 'luckili', 'dictat', 'doll', 'racial', 'feat', 'hard', 'temper', 'speech', 'bat', 'behind', 'jame', 'furi', 'loos', 'redneck', 'cole', 'wonder', 'chanc', 'weather', 'censor', 'emot', 'languag', 'warrant', 'oppon', 'jerk', 'cheek', 'marlon', 'reign', 'corni', 'seal', 'unfold', 'climat', 'choreograph', '17', 'inclus', 'potenti', 'furthermor', 'troubl', 'jami', 'chamberlain', 'credit', 'whine', 'bloom', 'glover', 'historian', 'inspir', 'kept', 'polic', 'mgm', 'twin', 'christin', 'sex', 'pink', 'bounc', 'empir', 'forc', 'moon', 'attend', 'exhibit', 'intric', 'whatev', 'almost', 'elvi', 'rhythm', 'mous', 'wive', 'miser', 'flower', '3000', 'territori', 'meaning', 'bela', 'import', 'report', 'market', 'bob', 'virtual', 'denni', 'bo', 'seem', 'exclus', 'spite', 'union', 'welcom', 'level', 'reput', 'demand', 'bend', 'rat', 'hilar', 'raj', 'earnest', 'terror', 'progress', 'chill', 'hall', 'remind', 'audienc', 'dawn', 'neatli', 'within', 'cash', 'safeti', 'total', 'prepar', 'laurenc', 'setup', 'tire', 'splendid', 'fiction', 'equal', 'murray', 'revolutionari', 'chase', 'sharon', 'yellow', 'uwe', 'colin', 'sensibl', 'ratso', 'paint', 'key', 'club', 'particip', 'wax', 'piec', 'dynam', 'eyr', 'occup', 'jewish', 'tierney', 'cloud', 'bay', 'taught', 'chose', 'nerd', 'particular', 'sound', 'jason', 'clan', 'expos', 'roy', 'recal', 'meat', 'led', 'anni', 'constant', 'mistaken', 'shatter', 'begin', 'shoot', 'pierc', 'poetri', 'proce', 'excus', 'hole', 'afterward', 'eleg', 'champion', 'contest', 'btw', 'diana', 'aim', 'neat', 'burt', 'russia', 'nolt', 'vomit', 'inconsist', 'youth', 'craig', 'machin', 'blob', 'psycholog', 'buffalo', 'commentari', 'bizarr', 'musician', 'run', 'sunni', 'shootout', 'fuller', 'dust', '3rd', 'fought', 'danc', 'materi', 'think', 'scope', 'offici', 'gay', 'error', 'exploit', 'facial', 'nonsens', 'fianc', 'philosophi', 'signific', 'outlaw', 'news', 'shaki', 'say', 'cruis', 'illog', 'imposs', 'petti', 'much', 'wretch', 'cycl', 'rope', 'nut', 'rex', 'self', 'undead', 'clip', 'washington', 'paranoia', 'perform', 'muddl', 'fade', 'plant', 'fbi', 'mel', 'rise', 'noir', 'acquir', 'cemeteri', 'sequenc', 'patienc', 'sappi', 'drown', 'return', 'pour', 'condemn', 'firmli', 'lanc', 'rot', 'handicap', 'plod', 'beard', 'exampl', 'drawn', 'angl', 'crazi', 'taken', 'superb', 'walken', 'employe', 'up', 'slice', 'reynold', 'luxuri', 'move', 'parad', 'sand', '1985', 'cave', 'done', 'dramat', 'sicken', 'illeg', 'defin', 'stinker', 'rescu', 'warn', 'display', 'pray', 'subtli', 'dire', 'red', 'laura', 'latter', 'night', 'partial', 'butler', 'upper', 'marti', 'borrow', 'scari', 'beat', 'discoveri', 'gori', 'bin', 'hook', 'buzz', 'undertak', 'japan', 'behavior', 'big', 'hardi', 'poem', 'featur', 'walk', 'dan', 'budget', 'internet', 'collect', 'hous', 'box', 'mexico', 'shoulder', 'stunt', 'danish', 'broad', 'sidney', 'decid', 'realist', 'pair', 'exposit', 'load', 'trial', 'predict', 'nine', 'island', 'develop', 'practic', 'counter', 'vagu', 'bought', 'wild', 'corbett', 'lucil', '1939', 'particularli', 'underli', 'lo', 'lloyd', 'del', 'camp', 'character', 'duck', 'violenc', 'talk', '1995', 'thumb', 'grade', 'immedi', 'inaccuraci', 'gimmick', 'logan', 'cook', 'instinct', 'hal', 'dismiss', 'fest', 'creek', 'isol', 'wisdom', 'surf', 'torment', 'weekend', 'zero', 'mexican', 'tag', 'precis', 'walt', 'brought', 'amateurish', 'point', 
'dedic', 'charisma', 'lit', 'mislead', 'masterson', 'cruelti', 'bori', 'gloriou', 'rehears', 'atlanti', 'raymond', 'money', 'abus', 'rear', 'nude', 'facil', 'insan', 'amanda', 'stage', 'repeat', 'admittedli', 'harmless', 'idol', 'fx', 'six', 'editor', 'coup', 'usa', 'style', 'scratch', 'deliveri', 'represent', 'proclaim', 'dinosaur', 'bate', 'chip', 'hell', 'sassi', 'bewar', 'construct', 'special', 'commerci', 'restrict', 'attitud', 'loretta', 'uplift', 'shoe', 'tunnel', 'gritti', 'tongu', 'corpor', 'consider', 'actual', 'histor', 'jungl', 'church', 'matur', 'despair', 'mildli', 'alexand', 'respond', 'kate', 'rocket', 'india', 'outcom', 'undoubtedli', 'weaker', 'luka', 'id', 'lisa', 'dig', 'einstein', 'garbag', 'swedish', 'copi', 'drew', 'encourag', 'traci', 'profit', 'wwe', 'footbal', 'true', 'pride', 'outsid', 'martin', 'goer', 'take', 'sensual', 'doubt', 'suck', 'recov', 'head', 'camcord', 'exquisit', 'amus', 'er', 'foreign', 'tripl', 'crystal', 'understand', 'conveni', 'cari', 'flat', 'becom', 'nowher', 'charismat', 'shop', 'favorit', 'wander', 'delet', 'simpson', 'convincingli', 'fame', 'servant', 'gambl', 'stiller', 'theatr', 'nervou', 'would', 'season', 'propheci', 'distinguish', 'pg', 'vh', 'watson', 'laid', 'von', 'forbidden', 'godfath', 'jewel', 'characterist', 'inherit', 'minut', 'bath', 'larger', 'inform', 'sadist', 'partner', 'gross', 'pc', 'diseas', 'pacif', 'rosario', 'tactic', '1986', 'odd', 'weird', 'brosnan', 'relat', 'certain', 'occas', 'conrad', 'giant', 'opportun', 'greatest', 'graham', 'ray', 'underneath', 'separ', 'legendari', 'bullet', 'bride', 'lesser', 'cream', 'fate', 'explor', 'declar', 'boredom', 'sell', 'bondag', 'comprehend', 'shine', 'aw', 'drain', 'comprehens', 'regular', 'audrey', 'oz', 'divorc', 'awkward', 'afternoon', 'guarante', 'protagonist', 'serial', 'short', 'put', 'demonstr', 'william', 'pitt', 'manipul', 'etern', 'statu', 'outer', 'kumar', 'grandmoth', 'jeffrey', 'visitor', 'domest', 'dont', 'complaint', 'abund', 'margin', 'reson', 'mighti', 'evolut', 'alicia', 'heston', 'hide', 'organ', 'gasp', 'beverli', 'verg', 'roman', 'enchant', 'bach', 'content', 'home', 'donna', 'emperor', 'nun', 'carl', 'valentin', 'toe', 'sentiment', 'length', 'tediou', 'pressur', 'juvenil', 'viru', 'lower', 'spike', 'thompson', 'collabor', 'mess', 'macabr', 'form', 'contradict', 'took', 'con', 'forget', 'angel', 'strain', 'demis', 'romero', 'argument', 'gotta', 'dixon', 'celebr', 'crowd', 'involv', 'godzilla', 'portrait', 'past', 'perfect', 'horrifi', 'mouth', 'spiritu', 'alli', 'work', 'ebert', 'ever', 'hawk', 'vice', 'cox', 'bug', 'bu', 'father', 'costum', 'music', 'tonight', 'shaw', 'exot', 'matthew', 'releas', 'macho', 'noth', 'pleasur', 'pickford', 'characteris', 'restor', 'card', 'fire', 'reid', 'genr', '12', 'succe', 'mere', 'underr', 'soup', 'god', 'overal', 'newspap', 'earl', 'ustinov', 'fido', 'case', 'arthur', 'ram', 'reunion', 'scientist', 'suit', 'cliff', 'miami', 'alongsid', 'chew', 'muscl', 'heel', 'complet', 'nazi', 'slap', 'repris', 'mankind', 'addict', 'term', 'dwarf', 'dare', 'prevent', 'ken', 'worthless', 'soldier', 'paradis', 'american', 'ingredi', 'moe', 'shirt', 'two', 'biggest', 'shorter', 'impress', 'go', 'improv', 'eli', 'sandler', 'wait', 'nick', 'power', 'clown', 'wacki', 'lincoln', 'carter', 'artwork', 'profession', 'hum', 'posit', 'photo', 'painter', 'useless', 'flesh', 'psychot', 'opinion', 'british', 'nolan', 'basing', 'creat', 'predat', 'silent', 'multipl', 'fundament', 'sub', 'strongest', 'start', 'see', 'butcher', 'peck', 'europa', 
'deliv', 'other', 'mini', 'firstli', 'unsuspect', 'halloween', 'busi', 'saint', 'crow', 'front', 'skill', 'lead', 'knight', 'deep', 'spin', 'kong', 'vein', 'innov', 'blow', 'grab', 'kidnap', 'america', 'boast', 'visibl', 'third', 'smooth', 'former', 'blind', 'runner', 'magnific', 'everyon', 'myer', 'finest', 'written', 'voyag', 'yet', 'lena', 'alison', 'kubrick', 'launch', 'remov', 'horn', 'disregard', 'near', 'maci', 'snow', 'creator', 'stole', 'frame', 'conneri', 'retir', 'uniform', 'oliv', 'unsatisfi', 'comparison', 'rambl', 'departur', 'rob', 'fighter', 'eugen', 'implic', 'lurk', 'punish', 'feminist', 'hundr', 'stop', 'trek', 'numb', 'comment', 'polici', 'radiat', 'shirley', 'pointless', 'harri', 'dreari', 'hapless', 'perpetu', 'juli', 'edit', 'bond', 'alic', 'depart', 'robot', 'conquer', 'forth', 'shortcom', 'critic', 'novel', 'suprem', 'flame', 'well', 'carradin', 'willi', 'orphan', 'japanes', 'hopeless', 'set', 'le', 'push', 'sergeant', 'studi', 'franci', 'glow', 'exercis', 'owen', 'martial', 'chronicl', 'pop', 'flirt', 'gangster', 'snl', 'dave', 'rub', 'philip', 'hardcor', 'oper', 'walker', 'paltrow', 'porno', 'salli', 'caricatur', 'guilti', 'fifth', 'disabl', 'twice', 'next', 'public', 'concept', 'pace', 'rocki', 'seller', 'detail', 'collaps', 'darker', 'berlin', 'tast', 'fri', 'least', 'survivor', 'scheme', 'gonna', 'claim', 'west', 'legal', '19th', 'bollywood', 'blond', 'england', 'ga', 'pervert', 'sissi', 'lone', 'exchang', 'ladi', 'buck', 'tribe', 'poison', 'kurosawa', 'clerk', 'utter', 'huston', 'crew', 'inexplic', 'integr', 'somehow', 'rukh', 'businessman', 'lewi', 'celluloid', 'add', 'whack', 'danni', 'tower', 'accomplish', 'listen', 'focus', 'complic', 'escap', 'preciou', 'caught', 'ban', 'incompet', 'elizabeth', 'program', 'gone', 'rebel', 'tell', 'robert', 'search', 'black', 'disc', 'interpret', '2003', 'outdat', 'along', 'bogu', 'intrigu', 'cabl', 'neurot', 'flynn', 'mesmer', 'worm', 'franki', 'presid', 'disast', 'cinemat', 'reason', 'variou', 'suddenli', 'await', 'code', 'rape', 'nerv', 'locat', 'boston', 'awesom', 'reserv', 'retriev', 'rental', 'chao', 'poe', 'lol', 'monoton', 'lengthi', 'asian', 'magic', 'elev', 'accompani', 'childish', 'parson', 'climb', 'definit', 'deliber', 'accent', 'scorses', 'movi', 'beer', 'rural', 'composit', 'yell', 'may', 'mtv', 'imperson', 'global', 'neg', 'bud', 'underworld', 'emphasi', 'compos', '1994', 'although', 'instantli', 'award', 'eccentr', 'centr', 'horrif', 'recit', 'cover', 'heart', 'command', 'employ', 'jake', 'exterior', 'distract', 'blend', 'stone', 'geniu', 'feel', 'structur', 'mormon', 'funer', 'roll', 'could', 'vincent', 'law', 'appreci', 'watch', 'fish', 'detach', 'sailor', 'kung', 'pacino', 'bait', 'pictur', 'endless', 'initi', 'lugosi', 'broke', 'bourn', 'wake', 'disagre', 'decept', 'win', 'still', 'bernard', 'com', 'sharp', 'renaiss', 'immort', 'closest', 'fulli', 'apolog', 'nurs', 'curs', 'neck', 'rooney', 'bit', 'bent', 'weak', 'ex', 'scoop', 'fairli', 'promot', 'easi', 'becam', 'evan', 'ghost', 'peak', 'forgiv', 'streep', 'alien', 'sh', 'descend', 'mood', 'poignant', '000', 'keep', 'insert', 'justifi', '1981', 'action', 'unit', 'resolv', 'hk', 'perspect', 'sequel', 'album', 'baker', 'quot', 'introduc', 'alex', 'rich', 'blackmail', 'circumst', 'massacr', 'child', 'signal', 'duo', 'danger', 'brand', 'writer', 'graduat', '11', 'tim', 'otherwis', 'snake', 'sorrow', 'must', 'crack', 'hello', 'either', 'keith', 'compass', 'refresh', 'legaci', 'fred', 'place', 'bunch', 'collector', 'surprisingli', 'fool', 'flair', 'ala', 
'passabl', 'wow', 'craze', 'discern', 'sleep', 'bedroom', 'cue', 'belong', 'cousin', 'mainstream', 'viewpoint', 'fare', 'choreographi', 'mock', 'hugh', 'johnni', 'digniti', 'constantli', 'glanc', 'professor', 'hulk', 'need', 'rhyme', 'moment', 'sean', 'bsg', 'headach', 'accid', 'focu', 'attenborough', 'bust', 'par', 'valley', 'dragon', 'griffith', 'instal', '15', 'patricia', 'frantic', 'puppi', 'toler', 'bubbl', 'record', 'color', 'testament', 'consid', 'crude', 'seedi', 'properti', 'crash', 'greed', 'coat', 'rod', 'quit', 'bad', 'ed', 'assum', 'wolf', 'politician', 'worship', 'diari', 'semi', 'typic', 'hostag', 'merci', 'entir', 'richard', 'bush', 'fix', 'michel', 'sadli', 'vain', 'minimum', 'kudo', 'missil', 'racist', 'miracl', 'layer', 'brillianc', 'religion', 'translat', 'confirm', 'arrang', 'obstacl', 'aggress', 'bowl', 'concern', 'environ', 'foxx', 'elabor', 'tribut', 'pack', 'knock', 'montana', 'phil', 'hallucin', 'appal', 'pilot', 'enlist', 'armi', 'ashley', 'chan', 'ned', 'festiv', 'stay', 'social', 'domin', 'street', 'attract', 'northern', 'score', 'estrang', 'friend', 'paula', 'attempt', 'melodrama', 'week', 'ideolog', 'profound', 'wrote', 'dad', 'importantli', 'build', 'comic', 'ad', 'king', 'craft', 'abound', 'new', 'dean', 'fake', 'steal', 'spent', 'bulli', 'spoken', 'horrid', 'categori', 'davi', 'boyfriend', 'spielberg', 'piano', 'string', 'romant', 'cloth', 'superbl', 'engross', 'chuck', 'veteran', 'thug', 'carlo', 'mute', 'rival', 'tape', 'dump', 'thirti', 'squar', 'carey', 'mst3k', 'doom', 'hopkin', 'descript', 'slam', 'toronto', 'lou', 'disgrac', 'elect', 'arrog', 'laughter', 'unattract', 'vacat', 'aid', 'absent', 'loss', 'vile', 'heard', 'liner', 'convolut', 'duke', 'threat', 'fail', 'hara', 'guest', 'stuff', 'mode', 'fleet', 'branagh', 'experiment', 'mitch', 'failur', 'uncut', 'stake', 'brian', '80', 'group', 'aris', 'orlean', 'seen', 'paxton', 'improb', 'call', 'spit', 'brilliantli', 'frankenstein', 'obtain', 'prom', 'howl', 'lampoon', 'violent', '45', 'spade', 'provid', 'matt', 'ham', 'soviet', 'fals', 'properli', 'mental', 'everi', 'ventur', 'simultan', 'captain', 'out', 'joke', 'airplan', 'alan', 'felix', 'liu', 'effort', 'duel', 'similarli', 'anyth', 'racism', 'sabrina', 'spoiler', 'render', 'lucki', 'natali', 'absurd', 'terrif', 'jacket', 'visual', 'dian', 'catherin', 'beatl', 'crippl', 'deserv', 'maria', 'spend', 'chair', 'amongst', 'exagger', 'let', 'grandfath', 'sinc', 'halfway', 'smith', 'ordinari', 'spring', 'hatr', 'murder', 'barri', 'adolesc', 'land', 'persona', 'frontal', 'bell', 'jesu', 'rip', 'advic', 'absenc', 'purchas', 'buster', 'catchi', 'author', 'psychiatrist', 'hain', 'imdb', 'subtleti', '1940', 'debat', 'gregori', 'trip', 'jimmi', 'virgin', 'geni', 'predecessor', 'cameron', 'closer', 'segment', 'ought', 'duval', 'happen', 'invent', 'basketbal', 'countrysid', 'electr', 'nobl', 'unforgett', 'climax', 'stale', 'unusu', 'truth', 'famou', 'dolph', 'stereotyp', 'patrick', 'inner', 'worri', 'randomli', 'infam', 'sutherland', 'suspect', 'palanc', 'what', 'flashi', 'fluff', 'awak', 'cap', 'victori', 'mum', 'ignor', 'defens', 'period', 'want', 'choru', 'broken', 'chosen', 'eat', 'lavish', 'paus', 'minim', 'fit', 'vanc', 'streisand', 'undermin', 'babe', 'flip', 'bakshi', 'serv', 'pattern', 'bring', 'chop', 'tortur', 'grim', 'target', 'corman', 'wore', 'omin', 'truman', 'appar', 'top', 'hack', 'slapstick', 'deal', 'andr', 'sit', 'eva', 'troop', 'hat', 'uniformli', 'excess', 'sens', 'transport', 'shout', 'mostli', 'rehash', 'pervers', 'ego', 'aspect', 'ben', 
'prey', 'belief', 'erot', 'conserv', 'ugli', 'togeth', 'ethan', 'interest', 'lose', 'actor', 'preced', 'kolchak', 'reaction', 'fifteen', 'note', 'statement', 'malon', 'artifici', 'tackl', 'galaxi', 'invit', 'anchor', 'capabl', 'loneli', 'whilst', 'ensur', 'alcohol', 'slick', 'elvira', 'cinderella', 'dead', 'grow', 'goe', 'convert', 'stuck', 'mild', 'farm', 'sitcom', 'mad', 'best', 'beach', 'clad', 'ariel', 'pete', 'sooner', 'tech', 'lack', 'hurt', 'endur', 'whether', 'firm', 'emerg', 'senat', 'text', 'drivel', 'bunni', 'confus', 'darren', 'carmen', 'grip', '1970', 'bean', 'kinda', 'merit', 'contain', 'catch', 'seven', 'advantag', 'kind', '99', 'help', 'admit', 'brutal', 'presum', 'variat', 'orient', 'neil', 'info', 'ian', 'vet', 'doubl', 'rest', 'miscast', 'absolut', 'scariest', 'relief', 'howard', 'lend', 'excel', 'instanc', 'giggl', 'slip', 'anyway', 'introduct', 'documentari', 'strongli', 'flawless', 'joel', 'toy', 'dancer', 'arguabl', 'tabl', 'suppos', 'loath', 'propaganda', 'contempl', 'ye', 'manner', 'bridget', 'horrend', 'asset', '90', 'technicolor', 'psychic', 'theatric', 'screw', 'expedit', 'tommi', 'ingrid', 'adventur', 'video', 'soundtrack', 'inflict', 'bikini', 'height', 'ann', 'modest', 'weight', 'mummi', 'palac', 'preachi', 'slash', 'earli', 'revolt', 'terri', 'shut', 'quaid', 'thriller', 'seriou', 'freeman', 'cheap', 'watchabl', 'plu', 'revers', 'david', 'trigger', 'yeah', 'threw', 'slaughter', 'free', 'creatur', 'andrew', 'shortli', 'fox', 'whore', 'tip', 'fishburn', 'jew', 'round', '1983', 'maniac', 'remot', 'exactli', 'earn', 'humour', 'relentless', 'peter', 'spawn', 'fund', 'solo', 'also', 'clara', 'main', 'affleck', 'lay', 'block', '10', 'post', 'date', 'beyond', 'harvey', 'adult', 'nope', 'reviv', 'digit', 'indiffer', 'fruit', 'citizen', 'pretenti', 'grant', 'colonel', 'held', 'view', 'grown', 'neighbor', 'soderbergh', 'though', 'might', 'hospit', 'photograph', 'robber', 'karl', 'recycl', 'journalist', 'great', 'publish', 'em', 'civil', 'birthday', 'beauti', 'cure', 'secondli', 'cancer', 'titl', 'santa', 'trend', 'arriv', 'hartley', 'audit', '25', 'order', 'aristocrat', 'pokemon', 'rita', 'doc', 'throat', 'lesli', 'britain', 'allow', 'worst', 'french', 'salt', 'buddi', 'rapist', 'kim', 'away', 'countri', 'forgot', 'tough', 'amitabh', 'candi', 'increasingli', 'sheer', 'scriptwrit', 'folk', 'sneak', '19', 'goldsworthi', 'look', 'save', 'breast', 'smell', 'gunga', 'solut', 'photographi', 'penn', 'cabin', 'ted', 'templ', 'alley', 'lawrenc', 'charg', 'eventu', 'charli', '1971', 'vietnam', 'epitom', '1976', 'inspector', 'mobil', 'scott', 'pamela', 'meant', 'chang', 'massiv', 'film', 'function', 'faith', 'watcher', 'kurt', 'stronger', 'hammer', 'bow', 'obviou', 'deriv', 'high', 'nail', 'paul', 'etc', 'pulp', 'rather', 'indic', 'util', 'secretli', 'milo', 'seed', '14', 'pirat', 'kiss', 'contribut', 'gere', 'flight', 'honestli', 'cg', 'brenda', 'prior', 'occasion', 'consciou', 'speci', 'adopt', 'analysi', 'theater', 'titan', 'melodi', 'kenneth', 'gabriel', 'boat', 'brain', 'fantast', 'televis', 'fear', 'roommat', 'casual', 'heartbreak', 'axe', 'awaken', 'patriot', 'abandon', 'downey', 'asid', 'grate', 'canyon', 'describ', 'boss', 'pitch', 'wise', 'subject', 'scientif', 'jane', 'stalk', 'palma', 'surpris', 'jam', 'world', 'life', 'pun', 'donald', 'unintent', 'bitch', 'winter', '20', 'julian', 'dilemma', 'stan', 'inhabit', 'air', 'valid', 'sure', 'mile', 'girl', 'indian', 'imagin', 'saga', 'explos', 'enlighten', 'ruthless', 'robinson', 'kyle', 'frighten', 'convinc', 'luke', 
'cri', 'melt', 'bro', 'vengeanc', 'trademark', 'get', 'minist', 'later', 'john', 'mobster', 'tension', 'medic', 'end', 'beg', 'stallon', 'approach', 'moron', 'surreal', 'degrad', 'multi', 'falk', 'warm', 'train', 'secret', 'brook', '40', 'page', 'mike', 'down', 'unfair', 'sirk', 'tad', 'children', 'balanc', 'steam', 'channel', 'empti', 'regard', 'texa', 'daisi', 'villag', 'exit', 'forest', 'sympath', 'badli', 'drive', 'retard', 'improvis', 'cusack', 'notori', 'contact', 'sophi', 'immers', 'scarecrow', 'spoke', 'finish', 'storylin', 'harder', 'nichola', 'ice', 'dismal', 'disord', 'break', 'sleaz', 'thought', 'cheer', 'bound', 'profil', '13', 'voic', 'abc', 'niro', 'ultra', 'dub', 'built', 'hollow', 'mindless', 'triangl', 'store', 'pat', 'soprano', 'hesit', 'recent', 'unbeliev', 'pick', 'de', 'secretari', 'frog', 'assassin', 'damm', 'connect', 'voight', 'cb', 'often', 'belli', 'electron', 'incomprehens', 'strictli', 'wick', 'examin', 'suspici', 'athlet', 'www', 'panic', 'pose', 'compar', 'overdon', 'dr', 'militari', 'real', 'drove', 'stark', 'tourist', 'disney', 'difficulti', 'younger', 'nuditi', 'casino', 'replay', 'startl', 'winchest', '50', 'ami', 'yesterday', 'dislik', 'phoni', 'surround', 'loyalti', 'wang', 'peopl', 'anywher', 'airport', 'quirki', 'continu', 'alter', 'incoher', 'intend', 'radic', 'ball', 'anticip', 'zane', 'primit', 'gold', 'seagal', 'oil', 'emphas', 'gerard', 'michael', 'trap', 'understood', 'huge', 'therefor', 'overlong', 'bigger', 'haunt', '1991', 'newer', 'present', 'bear', 'carla', 'tremend', 'chain', 'downhil', 'eighti', 'barrel', 'squad', 'field', 'suppli', 'aunt', 'conclus', '35', 'posey', 'literari', 'ryan', 'eeri', 'preming', 'unorigin', 'nostalg', 'met', 'school', 'asleep', '2008', 'slasher', 'joe', 'reli', 'critiqu', 'premis', 'meander', 'errol', 'worn', 'evid', 'fetch', 'hope', 'modesti', 'crawford', 'percept', 'resist', 'connor', 'varieti', 'menac', 'jean', 'poker', 'robbin', 'poor', 'journey', 'ludicr', 'possibl', 'older', 'african', 'chemistri', 'passag', 'leather', 'dish', 'incident', 'unfortun', 'hair', 'neglect', 'accur', 'judd', 'der', 'row', 'et', 'highlight', 'clair', 'match', 'pretend', 'brooklyn', 'holiday', 'huh', 'mildr', '4th', 'heck', 'christi', 'plastic', 'grayson', 'heal', 'rave', 'lesbian', 'transit', 'cd', 'evok', 'deadli', 'wood', 'christ', 'descent', 'everybodi', 'eager', 'jeff', 'vibrant', 'thoroughli', 'academi', 'walsh', 'indulg', 'milk', 'cup', 'phenomenon', 'media', 'given', 'spark', 'fortun', 'heartfelt', 'screenplay', 'bland', 'phillip', 'sever', 'wizard', 'minu', 'brother', 'memori', 'wayn', 'sustain', 'boxer', 'blank', 'spoof', 'curiou', 'earlier', 'competit', 'jenni', 'size', 'harsh', 'clumsi', 'urban', 'marion', '1979', 'heaven', 'contempt', 'chines', 'click', 'clueless', 'fanci', 'edgar', 'quiet', 'ear', 'neill', 'wing', 'parodi', 'cagney', 'fi', 'housewif', 'miik', 'cool', 'theori', 'safe', 'simmon', 'mario', 'pie', 'rid', 'effici', 'metal', 'clean', 'realis', '1996', 'poorli', 'expens', 'area', 'know', 'tea', 'chainsaw', 'tool', 'trait', 'filth', 'wheel', 'mystic', 'tragedi', 'yard', 'coffe', 'ancient', 'worthwhil', 'war', 'che', 'step', 'cop', 'state', 'wendigo', 'bold', 'depend', 'highest', 'mortal', 'desper', 'reed', 'monti', 'evelyn', 'martian', 'comput', 'dear', 'park', 'remak', 'insult', 'morri', 'cerebr', 'leap', 'illustr', 'origin', 'garden', 'resurrect', 'recogn', 'mol', 'robberi', 'carol', 'plate', 'oblig', 'fist', 'half', 'movement', 'song', 'plane', 'sloppi', 'rumor', 'barbara', 'hong', 'ensembl', 'rose', 
'chavez', 'own', 'pleas', 'grand', 'destini', 'rap', 'joker', 'scar', 'dealer', 'trivia', 'expect', 'hyster', 'birth', 'whip', 'prison', 'taylor', 'reel', 'gain', 'scenario', 'like', 'ridicul', 'mysteri', 'norm', 'feet', 'israel', 'boob', 'one', 'psych', 'remad', 'samurai', 'label', 'avoid', 'none', 'anton', 'wilder', 'eaten', 'stolen', 'wholli', 'literatur', 'wanna', 'shower', 'sue', 'phantom', 'enough', 'vivid', 'charm', 'spot', 'summer', 'preston', 'dozen', 'amor', 'trace', 'anyon', 'quietli', 'bill', 'nonetheless', 'standard', 'acclaim', 'aliv', 'gap', 'attack', 'phone', 'except', 'proceed', 'nomin', 'sleazi', 'edgi', 'motiv', 'realism', 'ie', 'chief', 'knife', 'enthusiast', 'gillian', 'greet', 'alreadi', 'pit', 'narrow', 'max', 'culmin', 'cow', 'shelley', 'extens', 'unimagin', 'growth', 'egg', 'accus', 'barrymor', 'needless', 'jade', 'angst', 'wherea', 'fairi', 'twilight', 'mix', 'insid', 'timeless', 'clau', 'favourit', 'coupl', 'carri', 'rachel', 'pull', 'repetit', 'werewolf', 'window', 'late', 'magazin', 'franchis', 'sorri', 'tokyo', 'genuin', 'inan', 'ambigu', 'bag', 'enabl', 'crush', 'prequel', 'minor', 'meaningless', 'necessarili', 'legend', 'carpent', 'antic', 'colour', 'princip', 'ponder', 'mar', 'abraham', 'promis', 'achiev', 'mccoy', 'steel', 'empathi', 'clash', 'dawson', 'kathryn', 'pro', 'whatsoev', 'gal', 'navi', 'part', 'sexi', 'maintain', 'reliabl', 'championship', 'wear', 'base', 'domino', '1936', 'cheesi', 'hideou', 'norman', 'larri', 'likewis', 'upon', 'imit', 'instant', 'hostil', 'senior', 'dread', 'tens', 'briefli', 'zizek', 'pig', 'price', 'bed', 'love', 'walter', 'futurist', 'network', 'bar', 'clone', 'mutant', 'oldest', 'struggl', 'hippi', '13th', 'boyer', 'spooki', 'delic', 'region', 'lang', 'bye', 'explod', 'zoom', 'dylan', 'die', 'divers', 'surgeri', 'hilari', 'kingdom', 'mountain', 'enforc', 'iran', 'allen', 'harm', 'spray', 'wooden', 'cross', 'contemporari', 'disgust', 'junior', 'strong', 'blatant', 'ireland', 'tempt', 'joseph', 'artist', 'skull', 'burton', 'unreal', 'lui', 'higher', 'isra', 'foolish', 'death', 'comb', 'pant', 'jaw', 'blatantli', '73', 'packag', 'campbel', 'drake', 'stimul', 'banal', '70', 'member', 'director', 'russian', 'dress', 'passion', 'actress', 'bone', 'impact', 'bachelor', 'resolut', 'check', 'jacki', 'sci', 'clear', 'sink', 'calm', 'billi', 'plenti', 'devil', 'due', 'process', 'ago', 'simpli', 'filler', 'visit', 'inferior', 'burn', 'bake', 'care', 'discov', 'fabric', 'lane', 'sin', 'heartwarm', 'madonna', 'compel', 'someday', 'orson', 'tomato', '00', 'christian', 'share', 'maker', 'germani', 'luca', 'societi', 'dark', 'http', 'known', 'distribut', 'roar', 'hang', 'five', 'pile', 'cain', 'today', 'nicola', 'mathieu', 'cia', 'irrit', 'element', 'twenti', 'sung', 'eleph', '1990', 'besid', 'fisher', 'seldom', '95', 'childhood', 'document', 'senseless', 'warren', 'judgment', 'certainli', 'grudg', 'central', 'imageri', 'suffici', 'bother', 'cartoon', 'torn', 'annoy', 'door', 'health', 'outrag', 'extent', 'team', 'household', 'star', 'akshay', 'persuad', 'metaphor', 'dumb', 'ha', 'strive', 'hung', 'cring', 'manhattan', 'conscienc', 'cigarett', 'everyday', 'citi', 'offend', 'singl', 'ritchi', 'curtain', 'justin', 'notabl', 'relev', 'user', 'bacal', 'drunk', 'religi', 'dana', 'lousi', 'industri', 'weav', 'kazan', 'name', 'ford', 'marc', 'final', 'scene', 'boo', 'reminisc', 'sox', 'gotten', 'delici', 'nice', '2002', 'govern', 'deniro', 'hopper', 'ruth', 'wont', 'braveheart', 'grey', 'restaur', 'quarter', 'gilliam', 'old', 'class', 'daniel', 
'wide', 'thick', 'recreat', 'educ', 'stick', 'sad', 'slimi', 'simpl', 'funni', 'boy', 'pursuit', 'commend', 'prefer', 'highway', 'promin', 'difficult', '1933', 'strand', 'termin', 'histori', 'pre', 'mani', 'charact', 'pepper', '2004', 'associ', 'distress', 'belov', 'outright', 'sensit', 'oh', 'bike', 'springer', 'tendenc', 'outfit', 'flop', 'miniseri', 'rosemari', 'count', 'prank', 'roller', 'sentenc', 'eve', 'outstand', 'jessica', 'pale', 'nostalgia', 'pathet', 'worth', 'jack', 'coloni', 'boom', 'fever', 'rider', 'swept', 'tarzan', 'derang', 'midnight', 'river', 'letter', 'unleash', 'solv', 'instruct', 'race', 'goldberg', 'ultim', 'mid', 'leagu', 'khan', 'gave', 'stanley', 'seek', 'holli', 'laurel', 'basket', 'follow', 'will', 'isabel', 'vader', 'dim', 'divin', 'white', 'confess', 'rabbit', 'prejudic', 'trier', 'topic', 'tack', 'directori', 'finger', 'mail', 'grave', 'disbelief', 'valuabl', 'shield', 'elit', 'dubiou', 'elsewher', 'map', 'women', 'gun', 'revel', 'fatal', 'trust', 'pot', 'aussi', 'pioneer', 'institut', 'book', 'suspicion', 'uneven', 'monologu', 'polanski', 'gender', 'peer', 'buy', 'partli', 'dog', 'randi', 'came', 'equip', 'holocaust', 'truli', 'vote', 'small', 'ginger', 'human', 'conspiraci', 'assembl', 'owner', 'identifi', 'stiff', 'energet', 'terribl', 'cage', 'poster', 'dracula', 'planet', 'art', 'obnoxi', 'core', 'ensu', 'naschi', 'vignett', 'redeem', 'ahead', 'primarili', 'off', 'paramount', '18', 'polit', 'grace', 'prize', 'wors', 'numer', 'convict', 'lake', 'inaccur', 'breathtak', 'lift', 'congratul', 'observ', 'appl', 'enhanc', 'sum', 'despic', 'favour', 'tap', 'eye', 'bird', 'pan', 'hungri', 'preview', 'alarm', 'viciou', 'overr', 'repres', 'arnold', 'healthi', 'tarantino', 'tone', 'scotland', 'directli', 'invest', 'vari', 'float', 'tender', 'clue', 'sold', 'success', 'reject', 'unlik', 'wish', 'happi', 'deepli', 'repli', 'tight', 'backdrop', 'climact', 'closet', 'chicago', 'realm', 'naughti', 'lex', 'gore', 'newcom', 'skit', 'item', 'glare', 'smoke', 'fulci', 'commit', 'carlito', 'engag', 'compani', 'token', 'canadian', 'aka', 'ass', 'fragil', 'anim', 'parker', 'rate', 'cut', 'discuss', 'english', 'sir', 'inde', 'saw', 'biko', 'urg', 'cult', 'fault', 'keen', 'hammi', 'learn', 'spread', 'gate', 'distinct', 'styliz', 'killer', '100', 'fellow', 'laugh', 'cecil', 'coaster', 'orang', 'monster', 'injur', 'orchestr', 'write', 'immens', 'thousand', 'cardboard', 'thu', 'accident', 'background', 'li', 'shakespear', 'divid', 'parallel', 'behav', 'dud', 'curli', 'bomb', 'uk', 'despit', 'hank', 'long', 'convey', 'us', 'unless', 'disguis', 'whoopi', 'laughabl', 'coast', 'test', 'condit', 'sensat', 'right', 'louis', 'perceiv', 'asham', 'rifl', 'individu', 'never', 'travesti', 'topless', 'repress', 'spock', 'claustrophob', 'motorcycl', 'pregnant', 'banter', 'anoth', 'goldblum', 'expert', 'sea', 'legitim', '1945', 'stephen', 'session', 'april', 'slide', 'narrat', 'transform', 'pretens', 'constitut', 'nightclub', 'andi', 'spi', 'affect', 'accord', 'paper', 'sympathi', 'kidman', 'aspir', 'quick', 'psychopath', 'hackney', 'schlock', 'mclaglen', 'pleasantli', 'innoc', 'treat', 'tyler', 'miyazaki', 'olivi', 'plagu', 'moreov', 'deed', 'shark', 'scottish', 'timothi', 'betti', 'guess', 'monk', 'charlott', 'spree', 'mistak', 'irrelev', '1960', 'neo', 'shade', 'drum', 'sid', 'jr', 'rome', 'mind', 'lab', 'kane', 'ton', 'even', 'retain', 'appropri', 'niec', 'forgotten', 'bett', 'unexpectedli', 'project', 'inept', 'edward', 'read', 'kline', 'circl', 'extra', 'grief', 'bloodi', 'probabl', 
'sacrific', 'anybodi', 'al', 'flow', 'astonish', 'exhaust', 'hood', 'vital', 'necessari', 'chick', 'wendi', 'span', 'counterpart', 'water', 'tree', 'age', 'karen', 'ocean', 'sing', 'underground', 'surfac', 'nuclear', 'superman', 'lemmon', 'argu', 'brilliant', 'trail', 'abomin', 'proud', 'acid', 'be', 'columbo', 'european', 'way', 'fact', 'mitchel', 'candid', 'glimps', 'georg', 'bang', 'rule', 'stumbl', 'lust', 'south', 'pursu', 'young', 'olli', 'driven', 'abysm', 'faster', 'forgett', 'corrupt', 'pleasant', 'josh', 'traffic', 'heat', 'othello', 'unwatch', 'measur', 'manag', 'hyde', 'victoria', 'butt', 'destin', 'conclud', 'mixtur', 'independ', 'risk', 'foil', 'virginia', 'oscar', 'spirit', 'wast', 'dialog', 'recruit', 'strength', 'ambit', 'polish', 'suggest', 'memor', 'candl', 'sibl', 'decor', 'rain', 'fenc', 'muslim', 'reluct', 'effect', 'leav', 'hour', 'woo', 'consist', 'frank', 'illus', 'spinal', 'close', 'screenwrit', 'repeatedli', 'satan', 'habit', 'awhil', 'jackson', 'hokey', 'infect', 'linda', 'despis', 'dimension', 'differ', 'hip', 'brood', 'wipe', 'prais', 'lili', 'studio', 'board', 'rotten', 'chaplin', 'sam', 'witch', 'presenc', 'brown', 'less', 'toward', 'cuba', '1987', 'sucker', 'send', 'woodi', 'recogniz', 'pole', 'swear', 'stun', 'overwhelm', 'uncomfort', 'univers', 'trash', 'inabl', 'purpl', 'circu', 'destruct', 'told', 'miseri', 'whale', 'crime', 'grinch', 'karloff', 'prove', 'meet', 'dutch', 'situat', 'combin', 'aesthet', 'reader', 'tiresom', 'list', 'plot', 'fifti', 'tooth', 'septemb', 'alexandr', 'dive', 'reflect', 'showcas', 'vega', 'blur', 'cent', 'german', 'phrase', 'rubber', 'includ', 'okay', 'jonathan', 'marri', 'meryl', 'cameo', 'invad', 'hold', 'abrupt', 'we', 'stargat', 'homosexu', 'rare', 'agenc', 'casper', 'peac', 'puzzl', 'roger', 'landscap', 'parent', 'co', 'shallow', 'exist', 'howev', 'greatli', 'homag', 'somewhat', 'investig', 'handl', 'scare', 'mask', 'rendit', 'scoobi', 'gentleman', 'unfunni', 'station', 'jill', 'ground', 'rock', 'alfr', 'cultur', 'restrain', '1920', 'cinematographi', 'gift', 'gothic', 'sincer', 'perhap', 'glass', 'morbid', 'lot', 'anger', 'zone', 'loud', 'charl', 'gadget', '1997', 'sword', 'thin', 'stair', 'fan', 'gem', 'stream', 'forward', 'indi', 'un', 'clever', 'frankli', 'mediocr', 'swing', 'forgiven', 'sniper', 'vulner', 'teen', 'melissa', 'wrong', 'deni', 'dull', 'boot', 'interview', '2nd', 'relationship', 'friendship', 'damag', 'amateur', 'han', 'van', 'liberti', 'four', 'hate', 'perman', 'alert', 'lumet', 'come', 'pass', 'repuls', 'reach', 'spectacl', 'back', 'find', 'curios', 'protest', 'itali', 'wtf', 'fourth', 'kid', 'gag', 'mistress', 'drone', 'leo', 'kansa', 'direct', 'bruce', 'naiv', 'anderson', 'tame', 'instrument', 'drip', '1968', 'colleg', 'reunit', 'servic', 'blood', 'eric', 'realiz', '1950', 'fill', 'slight', 'futur', 'echo', 'consum', 'alec', 'drug', 'stuart', 'teacher', 'pal', 'essenti', 'rang', 'format', 'contract', 'lauren', 'nasti', 'flee', 'confin', 'courag', 'mayb', 'backward', 'thrill', 'dinner', 'bitten', 'helen', 'cannon', 'antholog', 'disturb', 'punch', 'pocket', 'ok', 'quest', 'rochest', 'enter', 'henri', 'blast', 'mari', 'till', 'obsess', 'bread', 'experienc', 'seduct', 'shoddi', 'priceless', 'fresh', 'femal', 'use', 'acknowledg', 'soft', 'maggi', 'desert', 'web', 'sunday', 'la', 'luck', 'abl', 'profess', 'leonard', 'knee', 'sleepwalk', 'smaller', 'delight', 'artsi', 'blunt', 'albeit', 'mine', 'confid', 'portray', 'jodi', 'deceas', 'eight', 'type', 'shadow', 'beneath', 'porn', 'capit', 'spanish', 'stylish', 
'gorgeou', 'cold', 'loyal', 'principl', 'crimin', 'last', 'earth', 'spontan', 'fell', 'specif', 'mean', 'jon', 'approv', 'brad', 'tube', 'dimens', 'implaus', 'major', 'sentinel', 'hbo', '1st', 'flag', 'transfer', 'site', 'cleverli', 'fast', 'wig', 'super', 'homer', 'said', 'flaw', 'atroci', 'riot', 'hitchcock', 'got', 'marvel', 'rubbish', 'spider', 'charlton', 'notion', 'loser', 'sweep', 'potter', 'kay', 'regardless', 'town', 'enthral', 'thunderbird', 'net', 'esther', 'stanwyck', 'lacklust', 'matrix', 'turd', 'subplot', 'greedi', 'technolog', 'shift', 'vampir', 'occur', 'crap', 'trailer', 'afford', 'stack', 'upset', 'arm', 'western', 'utterli', 'wife', 'terrifi', 'appeal', 'foster', 'kirk', 'astair', 'popular', 'june', 'slug', 'debt', 'gene', 'wire', 'attent', 'ish', 'cypher', 'wrap', 'bore', 'tomorrow', '2007', 'absorb', 'longer', 'spectacular', 'ration', 'lush', 'astronaut', 'jar', 'choic', 'rude', 'romanc', 'hand', 'ralph', 'cohen', 'parti', 'reduc', 'obscur', 'wet', 'snap', 'drag', 'spine', 'stori', 'provok', 'blake', 'superhero', 'nelson', 'smash', 'comed', 'floor', 'ridden', 'surpass', 'decis', 'arrest', 'press', 'richardson', 'dreck', 'bathroom', 'realiti', 'privat', 'whenev', 'previou', 'warmth', 'couch', 'deliver', 'straight', 'feast', 'flashback', 'fluid', '1984', 'driver', 'owe', 'mann', 'toilet', 'grasp', 'avail', 'via', 'belushi', 'kiddi', 'speed', 'mutual', '2000', 'preposter', 'unrel', 'articl', 'subtitl', 'corps', 'serious', 'bodi', 'disappoint', 'latin', 'seri', 'suicid', 'miller', 'batman', 'rubi', 'word', 'castl', 'stock', 'sport', 'dicken', 'automat', 'bless', 'explan', 'superfici', 'lindsay', 'easier', 'vast', 'fall', 'somebodi', 'claud', 'advanc', 'chees', 'portion', 'supernatur', 'stood', 'cassidi', 'myth', 'boll', 'jeremi', 'kitti', 'dement', 'sat', '1973', 'synopsi', 'settl', 'stalker', 'remark', 'soon', 'fat', 'cher', 'someon', 'glenn', 'emili', 'common', 'australian', 'shell', 'epic', 'nicol', 'protect', 'rememb', 'mechan', 'poke', 'thoma', 'iii', 'labor', 'leigh', 'gundam', 'hepburn', 'oppos', 'anthoni', 'liber', 'graini', 'button', 'optimist', 'ten', 'junk', 'inher', 'matter', 'control', 'cinematograph', 'mark', 'influenc', 'bleed', 'grew', 'jail', 'heist', 'dirt', 'smart', 'morgan', 'preach', 'frustrat', 'proper', 'hopelessli', 'camera', 'room', 'befriend', 'anymor', 'breakdown', 'guilt', 'scarfac', 'refus', 'teeth', 'contrari', 'line', 'monkey', 'behaviour', 'trashi', 'dvd', 'decent', 'humor', 'sometim', 'dazzl', 'devoid', 'nativ', 'ask', 'heavili', 'jump', 'giallo', 'intim', 'road', 'kapoor', 'surviv', 'shanghai', 'enemi', 'evil', 'sent', 'wave', 'lovabl', 'sublim', 'dentist', 'unawar', 'stress', 'pearl', 'hackman', 'attach', 'daughter', 'tini', 'challeng', 'sudden', 'fuel', 'els', 'da', 'sidewalk', 'low', 'grotesqu', 'correctli', 'stoog', 'track', 'middl', 'unrealist', 'lighter', 'cheat', 'rank', 'proport', 'tripe', 'woman', 'dollar', 'misguid', 'gruesom', 'vision', 'amazon', 'sympathet', 'experi', 'liter', 'suspens', 'select', 'kennedi', 'lifetim', '60', 'play', 'explain', 'qualiti', 'mr', 'swim', 'queen', 'recognit', 'jealou', 'creep', 'person', 'boil', 'romp', 'whose', 'inject', 'talki', 'waitress', 'ship', 'cube', 'execut', 'sacrif', 'doctor', 'compris', 'mickey', 'gentl', 'nephew', 'greg', 'faint', 'sun', 'explicit', 'fart', 'ladder', 'applaud', 'onlin', 'method', 'march', 'im', 'molli', 'crappi', 'mcqueen', 'tall', 'mate', 'beaten', 'lowest', '16', 'stretch', 'fantasi', 'thief', '1989', 'breed', 'poverti', 'sweet', 'opposit', 'addit', 'din', 'slow', 
'revolv', 'denzel', 'usual', 'nanci', 'gestur', 'communist', 'bronson', 'sammi', 'somewher', 'travel', 'wealth', 'instead', 'jerri', 'ward', 'turner', 'damon', 'wrestler', 'york', 'versu', 'conan', 'fu', 'nearbi', 'destroy', 'mob', 'guy', 'unravel', 'harold', 'bradi', 'favor', 'joan', 'turtl', 'dream', 'yeti', 'believ', 'court', 'underst', 'threaten', 'behold', 'tiger', 'commun', 'marshal', 'valu', 'tom', 'summar', 'natur', 'slave', 'brave', 'cgi', 'jim', 'current', 'kitchen', 'jordan', 'reveng', 'fanat', 'similar', 'jay', 'static', 'stare', 'defend', 'drunken', 'princess', 'showdown', 'brendan', 'cannot', 'cat', 'era', 'london', 'wound', 'premier', 'hidden', 'obligatori', 'mansion', 'decapit', 'answer', 'angri', 'ho', 'ring', 'insight', 'turkey', 'afraid', 'deeper', 'pen', 'cattl', 'baffl', '2006', 'hype', 'custom', 'schedul', 'roof', 'mainli', 'singer', 'murphi', 'fay', 'print', 'sketch', 'wrench', 'sunshin', 'hear', 'breath', 'strike', 'brush', 'defi', 'techniqu', 'whole', 'basement', 'tick', 'gather', 'adam', 'contrast', 'depict', 'unseen', 'intens', 'cours', 'clinic', 'secur', 'inevit', 'daili', 'credibl', 'martha', 'brazil', 'jo', 'ant', 'judi', 'blade', 'devic', 'brit', 'seventi', 'beast', 'correct', 'bank', 'makeup', 'ah', 'bumbl', 'sullivan', 'amount', 'steer', 'basebal', 'altman', 'establish', 'seduc', 'non', 'lord', 'blah', 'icon', 'nathan', 'especi', 'chuckl', 'captur', 'neither', 'throw', 'cathol', 'redund', 'drink', 'choos', 'conflict', 'cartoonish', 'biker', 'uh', 'vanish', 'tour', 'link', 'review', 'wed', 'goal', 'friday', 'subsequ', 'tie', 'tale', '1980', 'ya', 'campaign', 'eastern', 'steven', 'guin', 'beatti', 'qualifi', 'leg', 'advis', 'unconvinc', 'result', 'crisi', 'offic', 'everyth', 'bastard', 'among', 'irish', 'random', 'bare', 'yawn', 'shred', 'paid', 'silli', 'toni', 'perri', '75', 'philosoph', 'econom', 'farmer', 'resembl', 'freddi', 'uniqu', 'hey', 'marriag', 'tasteless', 'coach', 'fontain', 'hill', 'averag', 'famili', 'lifestyl', 'job', 'enterpris', 'smile', 'priest', 'hay', 'wilson', 'yearn', 'insist', 'advertis', 'tragic', 'bergman', 'outing', 'suffer', 'stomach', 'compens', 'intern', 'foot', 'royal', 'essenc', 'phenomen', 'someth', 'unabl', 'accept', '1988', 'widescreen', 'suspend', 'that', 'script', 'decad', 'triumph', 'sheet', 'honor', 'freez', 'dysfunct', 'depress', 'thread', 'enthusiasm', 'agenda', 'conceiv', 'sick', 'antagonist', 'authent', 'mayhem', 'highli', 'assort', 'biblic', 'footag', 'derek', 'taboo', 'oppress', 'band', 'attribut', 'supposedli', 'wrestl', 'robin', 'plan', 'stand', 'europ', 'game', 'convent', 'border', 'cowboy', 'banana', 'stellar', 'fight', 'conduct', 'proof', 'meg', 'idea', 'arrow', 'throughout', 'spiral', 'jennif', 'seemingli', 'rage', 'figur', 'judg', 'fascin', 'undeni', 'system', 'coher', 'give', 'forev', 'steve', 'quinn', 'lifeless', 'knightley', 'vanessa', 'capot', 'scroog', 'hudson', 'calib', 'ellen', 'assert', 'furiou', 'assist', 'damn', 'fonda', 'nose', 'touch', 'updat', 'graphic', '1977', 'altern', 'holi', 'face', 'prophet', 'meanwhil', 'interrupt', 'teenag', 'guid', 'convers', 'trade', 'burst', 'treasur', 'sort', 'dudley', 'australia', 'widow', 'center', 'nicholson', 'assur', 'resid', 'novak', 'day', 'todd', 'sight', 'north', 'jule', 'sake', 'assault', 'exact', 'light', 'realli', 'talent', 'bridg', 'victim', 'likabl', 'kent', 'issu', 'weakest', 'immatur', 'produc', 'moder', 'daddi', 'splatter', 'salman', 'rampag', 'remain', 'agre', 'better', 'magician', 'intellig', 'worthi', 'men', 'time', 'mother', 'saturday', 'deer', 
'taxi', 'ritual', 'zombi', 'shame', 'brando', 'hot', 'transcend', 'hoffman', 'bbc', 'outlin', 'unpredict', 'full', 'francisco', 'franco', 'campi', 'far', 'shepherd', 'join', '24', 'man', 'recognis', 'support', 'fallen', 'bitter', 'antwon', 'scream', 'penni', 'doo', 'nyc', 'stupid', 'address', 'inappropri', 'reward', 'insipid', 'reev', 'california', 'agent', 'rant', 'happili', 'hunt', 'bet', 'estat', 'omen', 'feminin', 'blew', 'lynch', 'plight', 'jone', 'altogeth', 'joy', 'extraordinari', 'cancel', 'gordon', 'lip', 'perfectli', 'scandal', 'ranger', 'entranc', 'rais', 'volum', 'cell', 'foul', 'male', 'bulk', 'model', 'glorifi', 'solid', 'leon', 'fine', 'golden', 'went', 'alik', 'christoph', 'bobbi', 'drift', 'atmospher', 'razor', 'bottom', 'newman', 'compliment', 'offens', 'son', 'uncl', 'ingeni', 'trick', 'museum', 'centuri', 'florida', 'excit', '2001', 'spare', 'soul', 'unhappi', 'cast', 'sophist', 'profan', 'ticket', 'homicid', 'vehicl', 'anna', 'venom', 'rough', 'wildli', 'heap', 'radio', 'option', 'kelli', 'nois', 'fair', 'dude', 'student', 'desir', 'nobodi', 'san', 'stroke', 'aborigin', 'peril', 'melodramat', 'baldwin', 'fright', 'react', 'dee', 'root', 'snatch', 'distanc', 'abort', 'mon', 'neighborhood', 'psycho', 'ramon', 'broadcast', 'intent', 'subtl', 'evolv', 'husband', 'idiot', 'lost', 'miracul', 'recommend', 'tacki', 'accuraci', 'awe', 'gilbert', 'nemesi', 'helm', 'million', 'acquaint', 'drop', 'min', 'incred', 'stir', 'extrem', 'embrac', 'newli', 'propos', 'creepi', 'mission', 'mass', 'ident', 'mom', 'gang', 'cliffhang', 'austin', 'tri', 'disastr', 'defeat', 'everywher', 'born', 'horribl', 'cuban', 'ninja', 'sinist', 'gear', 'jazz', 'engin', 'lover', 'attorney', 'raw', 'stilt', 'ride', 'mayor', 'gray', 'exposur', 'vastli', 'edg', '30', 'poetic', 'keaton', 'girlfriend', 'degre', 'cassavet', 'disjoint', 'incorrect', 'flavor', 'invis', 'frontier', 'wallac', 'timberlak', 'act', 'endear', 'humbl', 'injuri', 'randolph', 'margaret', 'fed', 'silver', 'notch', 'complain', 'technic', 'witti', 'moor', 'littl', 'rivet', 'korean', 'coincid', 'familiar', 'porter', 'hotel', 'show', 'nation', 'ceremoni', 'spain', 'tara', 'hint', 'staff', 'powel', 'archiv', 'nightmar', 'countless', 'scale', 'aforement', 'alright', 'entertain', 'edi', 'dealt', 'chapter', 'modern', 'trio', 'scienc', 'problem', 'audio', 'tank', 'live', 'ii', 'funniest', 'analyz', 'duti', 'equival', 'feed', 'crook', 'respect', 'dorothi', 'jet', 'blown', 'simon', 'food', 'hop', 'alon', 'teas', 'comedian', 'fulfil', 'youngest', 'invas', 'wall', 'receiv', 'spice', 'episod', 'symbol', 'combat', 'geek', 'emma', 'st', 'suffic', 'task', 'client', 'cope', 'switch', 'funnier', 'enorm', 'mill', 'bite', 'kill', 'lester', 'palm', 'ideal', 'corner', 'gina', 'host', 'spoil', 'pixar', 'green', 'mytholog', 'dispos', 'chicken', 'frequent', 'consequ', 'pet', 'rush', 'conscious', 'dose', 'unnecessari', 'detect', 'sole', 'research', 'sceneri', 'compet', 'tenant', 'tv', 'glad', 'nineti', 'ace', 'blockbust', 'farc', 'unpleas', 'cape', 'chri', '1930', 'verhoeven', 'ethnic', 'gandhi', 'wind', 'strip', 'normal', 'substanc', 'en', 'without', 'question', 'warner', 'steadi', 'announc', 'flick', 'three', 'object', 'sign', '1974', 'rout', 'awar', 'lock', 'julia', 'shove', 'narr', 'hamlet', 'cannib', 'iraq', 'intellectu', 'felt', 'spacey', 'soccer', 'programm', 'akin', 'straightforward', 'monument', 'open', 'comedi', 'clich', 'mention', 'tune', 'helpless', 'shake', 'uma', 'dash', 'seat', 'sourc', 'freedom', 'sale', 'unknown', 'interestingli', 'savag', 'sinatra', 
'policeman', 'resourc', 'lure', 'wit', 'eas', 'viewer', 'lazi', 'declin', 'gabl', 'creativ', 'corn', 'speak', 'selfish', 'toss', 'rooki', 'assign', 'muppet', 'extend', 'kick', 'messi', 'morn', 'clint', 'arc', 'readi', 'captiv', 'appear', 'punk', 'ill', 'pronounc', 'bottl', 'roth', 'chest', 'apart', 'east', 'justic', 'nevertheless', 'millionair', 'bargain', 'interior', 'sandra', 'patient', 'easili', 'terrorist', 'bias', 'dandi', 'albert', 'swallow', 'cost', 'crawl', 'stink', 'mermaid', 'blue', 'greater', 'cush', 'guard', 'plausibl', 'christma', 'refer', 'incorpor', 'austen', 'tend', 'weapon', 'encount', 'hero', 'italian', 'career', 'bash', 'classic', 'concentr', 'account', 'drama', 'jare', 'blair', 'sarah', 'rick', 'helicopt', 'lectur', 'side', 'cute', 'jess', 'pain', 'trilog', 'hors', 'nod', 'popcorn', 'cooki', 'pad', 'basi', 'sheriff', 'event', 'meal', 'cake', 'design', 'wreck', 'wagner', '1999', 'overact', 'context', 'jedi', 'factori', 'impli', 'cooper', 'meyer', 'possess', 'mafia', 'durat', 'embarrass', 'determin', 'twist', 'enjoy', 'replac', 'globe', 'admir', 'mount', 'alway', 'larg', 'gut', 'nake', 'depth', 'pure', 'confront', 'iv', 'requir', 'hit', 'nowaday', 'contriv', 'sky', 'purpos', 'clark', 'cinema', 'reveal', 'russel', 'aveng', 'thing', 'lesson', 'cruel', 'truck', 'china', 'onto', 'logic', 'dougla', 'dri', 'plain', 'medium', 'ol', 'stab', 'ironi', 'clearli', 'alvin', 'concert', 'susan', 'energi', 'pool', 'spell', 'sidekick', 'unsettl', 'nuanc', 'vaniti', 'complex', 'number', 'passeng', 'hitler', 'lundgren', 'elderli', 'master', 'moodi', '20th', 'freak', 'controversi', 'opera', 'gener', 'summari', 'vulgar', 'victor', 'humili', 'silenc', 'began', 'cynic', 'financi', 'rapidli', '22', 'struck', 'hamilton', 'horror', 'biographi', 'slightest', 'ballet', 'greek', 'shelf', 'previous', 'johnson', 'respons', 'ross', 'skin', 'gradual', 'preserv', 'verbal', '3d', 'belt', 'satir', 'bonu', 'battl', 'handsom', 'hart', 'astound', 'financ', 'treatment', 'chess', 'soap', 'prostitut', 'brief', 'around', 'submit', 'montag', 'arab', 'jan', 'miss', 'cant', 'carel', 'resum', 'alleg', 'bright', 'primari', 'masterpiec', 'traumat', 'angela', 'garbo', 'month', 'per', 'transplant', 'sixti', 'split', 'offer', 'bleak', 'whoever', 'wwii', 'ms', 'curti', 'teach', 'fog', 'screen', 'clock', 'hunter', 'smack', 'luci', 'loui', 'flock', 'macarthur', 'trauma', 'creation', 'incid', 'redempt', 'worker', 'version', 'occupi', 'rambo', 'wealthi', 'increas', 'ash', 'unbear', 'shot', 'adequ', 'slightli', 'ron', 'antonio', 'ran', 'pay', 'benefit', 'honest', 'affair', 'ritter', 'twelv', 'virtu', 'lyric', 'product', 'groan', 'fashion', 'unexplain', 'warrior', 'holm', 'sore', 'uninspir', 'theme', 'rel'} ###Markdown These words themselves don't tell us much, however if one of these words occured with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account? (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. 
We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
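# Note: session.upload_data() returns the S3 URI of the uploaded object (roughly
# 's3://<default-bucket>/<prefix>/new_data.csv'); we keep these URIs so that the
# training job and the batch transform jobs below know where to read the data from.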
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2019-07-29 10:41:42 Starting - Starting the training job... 2019-07-29 10:41:43 Starting - Launching requested ML instances...... 2019-07-29 10:42:44 Starting - Preparing the instances for training...... 2019-07-29 10:44:05 Downloading - Downloading input data 2019-07-29 10:44:05 Training - Downloading the training image.. Arguments: train [2019-07-29:10:44:24:INFO] Running standalone xgboost training. [2019-07-29:10:44:24:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8454.41mb [2019-07-29:10:44:24:INFO] Determined delimiter of CSV input is ',' [10:44:24] S3DistributionType set as FullyReplicated [10:44:26] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2019-07-29:10:44:26:INFO] Determined delimiter of CSV input is ',' [10:44:26] S3DistributionType set as FullyReplicated [10:44:27] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, 2019-07-29 10:44:23 Training - Training image download completed. Training in progress.[10:44:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 8 pruned nodes, max_depth=5 [0]#011train-error:0.3044#011validation-error:0.305 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. 
[10:44:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 8 pruned nodes, max_depth=5 [1]#011train-error:0.298#011validation-error:0.3005 [10:44:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [2]#011train-error:0.283133#011validation-error:0.2868 [10:44:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [3]#011train-error:0.276867#011validation-error:0.2789 [10:44:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 4 pruned nodes, max_depth=5 [4]#011train-error:0.271467#011validation-error:0.2757 [10:44:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [5]#011train-error:0.2636#011validation-error:0.2719 [10:44:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 14 pruned nodes, max_depth=5 [6]#011train-error:0.256467#011validation-error:0.2681 [10:44:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [7]#011train-error:0.249667#011validation-error:0.2603 [10:44:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [8]#011train-error:0.240667#011validation-error:0.2516 [10:44:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 14 pruned nodes, max_depth=5 [9]#011train-error:0.2378#011validation-error:0.2493 [10:44:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [10]#011train-error:0.2342#011validation-error:0.2451 [10:44:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [11]#011train-error:0.228133#011validation-error:0.239 [10:44:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [12]#011train-error:0.223467#011validation-error:0.2343 [10:44:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 4 pruned nodes, max_depth=5 [13]#011train-error:0.219#011validation-error:0.2324 [10:44:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [14]#011train-error:0.216133#011validation-error:0.2298 [10:44:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5 [15]#011train-error:0.212867#011validation-error:0.2279 [10:44:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [16]#011train-error:0.210267#011validation-error:0.2246 [10:44:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [17]#011train-error:0.2108#011validation-error:0.2248 [10:44:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [18]#011train-error:0.207067#011validation-error:0.2216 [10:44:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [19]#011train-error:0.2034#011validation-error:0.2187 [10:44:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [20]#011train-error:0.202467#011validation-error:0.2166 [10:44:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 12 pruned nodes, max_depth=5 [21]#011train-error:0.1996#011validation-error:0.2132 [10:44:58] src/tree/updater_prune.cc:74: tree 
pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [22]#011train-error:0.197733#011validation-error:0.212 [10:45:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [23]#011train-error:0.194933#011validation-error:0.2102 [10:45:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [24]#011train-error:0.192467#011validation-error:0.2104 [10:45:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5 [25]#011train-error:0.191067#011validation-error:0.2099 [10:45:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [26]#011train-error:0.188733#011validation-error:0.2065 [10:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [27]#011train-error:0.184867#011validation-error:0.2063 [10:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 10 pruned nodes, max_depth=5 [28]#011train-error:0.183467#011validation-error:0.2047 [10:45:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [29]#011train-error:0.181867#011validation-error:0.2053 [10:45:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5 [30]#011train-error:0.180333#011validation-error:0.2043 [10:45:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [31]#011train-error:0.179133#011validation-error:0.2039 [10:45:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5 [32]#011train-error:0.177533#011validation-error:0.2027 [10:45:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [33]#011train-error:0.175067#011validation-error:0.2023 [10:45:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [34]#011train-error:0.174133#011validation-error:0.2008 [10:45:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [35]#011train-error:0.170867#011validation-error:0.2011 [10:45:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [36]#011train-error:0.169933#011validation-error:0.202 [10:45:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [37]#011train-error:0.169667#011validation-error:0.2002 [10:45:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [38]#011train-error:0.168867#011validation-error:0.1997 [10:45:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [39]#011train-error:0.167933#011validation-error:0.1976 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. 
We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ..............................................! ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/366.4 KiB (4.1 MiB/s) with 1 file(s) remaining Completed 366.4 KiB/366.4 KiB (5.8 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-129722534204/xgboost-2019-07-29-10-47-26-110/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. 
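###Markdown (Aside) Returning to the **Question** above about leakage: one simple remedy is to hold back a slice of the newly collected data *before* retraining and to quote accuracy only on that held-out slice. The cell below is an illustrative sketch rather than part of the workflow; the variable names are hypothetical, and it assumes the encoded new data (`new_XV`) and labels (`new_Y`) are available (they were freed earlier to save memory, so they would need to be re-created first). ###Code
from sklearn.model_selection import train_test_split

# Keep 20% of the new data completely out of the retraining process.
new_fit_X, new_holdout_X, new_fit_y, new_holdout_y = train_test_split(
    new_XV, new_Y, test_size=0.2, random_state=0)

# The new XGBoost model would then be trained only on (new_fit_X, new_fit_y), and its
# accuracy reported on (new_holdout_X, new_holdout_y), for example via a batch transform
# job followed by accuracy_score(new_holdout_y, holdout_predictions).
###Output _____no_output_____ ###Markdown With that caveat noted, we now encode the original test set with the new vocabulary, upload it to S3, and test the new model on it, as described above.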
###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. 
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output --------------------------------------------------------------------------------------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. 
Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can typically be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output _____no_output_____ ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
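###Markdown Before combining anything, a quick optional sanity check (a sketch, assuming the archive extracted to `../data/aclImdb` as in the cell above) is to count the raw review files; each of the four folders should hold 12,500 reviews, for 25,000 per split. ###Code
import os
import glob

# Count the individual review files in each train/test, pos/neg folder.
for data_type in ['train', 'test']:
    for sentiment in ['pos', 'neg']:
        path = os.path.join('../data/aclImdb', data_type, sentiment, '*.txt')
        print(data_type, sentiment, len(glob.glob(path)))
###Output _____no_output_____ ###Markdown The next cell reads these files and combines them into a single training set and a single test set.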
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return the unified training data, test data, training labels, test labels return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
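###Markdown To make the processing steps concrete, here is a small illustrative sketch of the same pipeline applied to a made-up review: strip HTML, lowercase and keep only alphanumerics, drop English stopwords, and stem. The helper the notebook actually uses, `review_to_words`, is defined in the next cell and applies these same steps to the real reviews. ###Code
import re

import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from bs4 import BeautifulSoup

toy_review = "<br />This movie was GREAT -- the acting amazed me!"
text = BeautifulSoup(toy_review, "html.parser").get_text()   # remove the <br /> tag
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())            # lowercase, keep letters and digits only
words = [w for w in text.split() if w not in stopwords.words("english")]
print([PorterStemmer().stem(w) for w in words])               # roughly ['movi', 'great', 'act', 'amaz']
###Output _____no_output_____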
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
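###Markdown The "training set only" point is easy to see on a toy example (an illustrative sketch with made-up documents): a `CountVectorizer` learns its vocabulary from the documents it is fit on, and any word it has never seen simply contributes nothing when later documents are transformed. ###Code
from sklearn.feature_extraction.text import CountVectorizer

toy_train = ["good movie", "bad movie"]
toy_test = ["good plot twist"]            # 'plot' and 'twist' never appear in toy_train

toy_vectorizer = CountVectorizer()
print(toy_vectorizer.fit_transform(toy_train).toarray())   # rows over the columns ['bad', 'good', 'movie']
print(sorted(toy_vectorizer.vocabulary_))                   # vocabulary learned from the training documents only
print(toy_vectorizer.transform(toy_test).toarray())         # unseen words are silently ignored
###Output _____no_output_____ ###Markdown The `extract_BoW_features` helper below follows the same pattern on the real data: it fits on `words_train` and only transforms `words_test`.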
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) ###Output Read features from cache file: bow_features.pkl ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker.
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output INFO:sagemaker:Creating training-job with name: xgboost-2019-04-25-15-52-26-220 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
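###Markdown As an aside, Batch Transform is one of two common ways to get predictions from a trained estimator. The other is a persistent real-time endpoint, which this notebook relies on later when it updates a deployed model; a minimal sketch of that route, using the same high-level SDK calls, is shown below purely for comparison. ###Code
# Sketch only: serving the same trained estimator behind a real-time endpoint instead of a
# batch transform job. Deploying starts an instance that keeps running (and billing) until
# the endpoint is deleted, so this cell is illustrative rather than part of the workflow.
xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')

# Individual csv-serialized reviews could now be sent to xgb_predictor for low-latency
# predictions; when finished, the endpoint must be shut down to stop incurring charges.
xgb_predictor.delete_endpoint()
###Output _____no_output_____ ###Markdown For scoring the whole test set in one go, however, Batch Transform is the better fit, so we create the transformer object next.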
###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output INFO:sagemaker:Creating model with name: xgboost-2019-04-25-15-52-26-220 ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output INFO:sagemaker:Creating transform job with name: xgboost-2019-04-25-16-00-15-508 ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output ................................................! ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-eu-west-1-345073139350/xgboost-2019-04-25-16-00-15-508/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. 
However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() xgb_transformer.output_path !aws s3 cp --recursive $xgb_transformer.output_path $data_dir import sys # These are the usual ipython objects, including this one you are creating ipython_vars = ['In', 'Out', 'exit', 'quit', 'get_ipython', 'ipython_vars'] # Get a sorted list of the objects and their sizes sorted([(x, sys.getsizeof(globals().get(x))) for x in dir() if not x.startswith('_') and x not in sys.modules and x not in ipython_vars], key=lambda x: x[1], reverse=True) ###Output _____no_output_____ ###Markdown As usual, we copy the results of the batch transform job to our local instance. Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. 
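In addition to the single accuracy number computed in the next cell, it can be informative to see how the errors are split between the two classes, since accuracy alone can hide the fact that one class is misclassified far more often than the other. A minimal optional sketch using scikit-learn (`new_Y` and `predictions` are the arrays defined above): ###Code from sklearn.metrics import confusion_matrix, classification_report

# Rows are the true labels, columns are the predicted labels (0 = negative, 1 = positive)
print(confusion_matrix(new_Y, predictions))

# Per-class precision, recall and f1-score
print(classification_report(new_Y, predictions))
###Output _____no_output_____ ###Markdown The overall accuracy on the new data: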
###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge') xgb_predictor.delete_endpoint() ###Output INFO:sagemaker:Deleting endpoint configuration with name: xgboost-2019-04-25-14-07-56-367 INFO:sagemaker:Deleting endpoint with name: xgboost-2019-04-25-14-07-56-367 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. 
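If you would rather look at a handful of misclassified reviews in one go instead of calling `next` repeatedly, `itertools.islice` can pull several samples from the generator at once. A small optional sketch (it consumes items from `gn`, so the generator is re-created at the end so that the following cell still starts from a fresh iterator): ###Code from itertools import islice

# Take the first three misclassified samples from the generator
for words, true_label in islice(gn, 3):
    print(true_label, words[:20])  # the true label followed by the first 20 tokens of the review

# Re-create the generator so that the next cell gets a fresh iterator
gn = get_sample(new_X, new_XV, new_Y)
###Output _____no_output_____ ###Markdown Calling `next` directly returns a single misclassified review together with its true label: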
###Code print(next(gn)) ###Output (['nobodi', 'could', 'like', 'movi', 'merit', 'sens', 'humor', 'enjoy', 'schlock', 'movi', 'mst3', 'qualiti', 'rank', 'road', 'hous', 'preposter', 'charact', 'set', 'stori', 'line', 'bad', 'write', 'realli', 'crack', 'want', 'dust', 'guy', 'instead', 'dust', 'guy', 'f', '14', 'take', 'carrier', 'get', 'format', 'f', '16', 'without', 'hint', 'anger', 'skeptic', 'segal', 'goe', 'back', 'work', 'gener', 'minut', 'overse', 'covert', 'mind', 'wipe', 'seagal', 'segal', 'run', 'bullet', 'resort', 'knife', 'kill', 'guard', 'natur', 'guard', 'drop', 'gun', 'fight', 'knive', 'hand', 'grenad', 'dud', 'explod', 'anyway', 'littl', 'stealth', 'fighter', 'fli', 'way', 'california', 'afganistan', 'without', 'refuel', 'segal', 'fli', 'back', 'california', 'long', 'way', 'e', 'way', 'europ', 'even', 'though', 'carrier', 'give', 'air', 'support', '20', 'minut', 'away', 'arabian', 'sea', 'cic', 'carrier', 'consist', '3', 'black', 'pc', '2', 'flat', 'screen', 'tv', 'pictur', 'gaug', 'map', 'wall', 'hoot', 'banana'], 1) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'spill', 'victorian', 'ghetto', 'reincarn', 'weari', '21st', 'playboy'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'orchestr', 'banana', 'optimist', 'sophi', 'omin', 'dubiou', 'masterson'} ###Markdown These words themselves don't tell us much, however if one of these words occured with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here. Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. ###Code new_vectorizer.vocabulary_['masterson'] ###Output _____no_output_____ ###Markdown (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. 
This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send an bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. 
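# As a reminder, upload_data() returns the S3 URI of the uploaded file as a string;
# that URI is what gets stored in the *_location variables below.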
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. # Solution: new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output INFO:sagemaker:Creating training-job with name: xgboost-2019-04-25-16-39-03-796 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output INFO:sagemaker:Creating model with name: xgboost-2019-04-25-16-39-03-796 ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. 
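# Note that transform() only submits the batch transform job and returns right away;
# wait() blocks until the job has finished and its output has been written to S3.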
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output INFO:sagemaker:Creating transform job with name: xgboost-2019-04-25-16-46-25-541 ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-eu-west-1-345073139350/xgboost-2019-04-25-16-46-25-541/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() new_xgb_transformer.output_path !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. 
Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. # new_xgb_endpoint_config_info = None # Solution: new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output ---------------------------------------------------------------------------------------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. 
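A quick, optional way to see which endpoints are currently active in the account (and, once the cell below has run, to confirm that this one is gone) is the low-level `list_endpoints` call: ###Code # List any endpoints that are still active in this region along with their status
for endpoint in session.sagemaker_client.list_endpoints()['Endpoints']:
    print(endpoint['EndpointName'], endpoint['EndpointStatus'])
###Output _____no_output_____ ###Markdown The cell below performs the actual shutdown: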
###Code xgb_predictor.delete_endpoint() ###Output INFO:sagemaker:Deleting endpoint configuration with name: xgboost-2019-04-25-15-52-26-220 INFO:sagemaker:Deleting endpoint with name: xgboost-2019-04-25-15-52-26-220 ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. 
In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. ###Code # Make sure that we use SageMaker 1.x !pip install sagemaker==1.72.0 ###Output _____no_output_____ ###Markdown Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output _____no_output_____ ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start 
processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. ###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output _____no_output_____ ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
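To make the idea concrete before running the full pipeline, here is a tiny, self-contained illustration of what a bag-of-words encoding looks like; the two toy 'reviews' are invented purely for this sketch and play no role in the actual data set. ###Code from sklearn.feature_extraction.text import CountVectorizer

# Two toy 'reviews', already tokenized into words, mimicking our pre-processed data
toy_reviews = [['great', 'movi', 'great', 'act'],
               ['bad', 'movi', 'bad', 'plot']]

# The dummy preprocessor/tokenizer tell CountVectorizer that the documents are already token lists
toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
toy_features = toy_vectorizer.fit_transform(toy_reviews).toarray()

print(toy_vectorizer.vocabulary_)  # maps each word to a column index
print(toy_features)                # each row counts how often each word occurs in a review
###Output _____no_output_____ ###Markdown The helper below does the same thing for the real reviews, caches the result, and also returns the vocabulary learned from the training set.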
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
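# Because train_X and train_y were shuffled together in prepare_imdb_data, taking the
# first 10 000 rows (and the matching labels) gives a random, label-consistent split.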
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role

# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()

# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')

# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
                                    role,      # What is our current IAM Role
                                    train_instance_count=1, # How many compute instances
                                    train_instance_type='ml.m4.xlarge', # What kind of compute instances
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                    sagemaker_session=session)

# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
                        eta=0.2,
                        gamma=4,
                        min_child_weight=6,
                        subsample=0.8,
                        silent=0,
                        objective='binary:logistic',
                        early_stopping_rounds=10,
                        num_round=500)
###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')

xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output _____no_output_____ ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job. 
When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output _____no_output_____ ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
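The key detail is that `CountVectorizer` accepts a `vocabulary` argument, which fixes the word-to-column mapping up front instead of learning it from the data, so no call to `fit` is needed before transforming. A tiny, self-contained sketch of this behaviour (the toy words and documents are invented purely for illustration; the TODO cell that follows is where the real implementation belongs): ###Code from sklearn.feature_extraction.text import CountVectorizer

# A fixed word -> column index mapping, analogous to the vocabulary built earlier by extract_BoW_features
toy_vocab = {'good': 0, 'bad': 1, 'movi': 2}

toy_cv = CountVectorizer(vocabulary=toy_vocab, preprocessor=lambda x: x, tokenizer=lambda x: x)

# Documents are already token lists, just like our pre-processed reviews
print(toy_cv.transform([['good', 'movi', 'good'], ['bad', 'movi']]).toarray())
###Output _____no_output_____ ###Markdown Now apply the same idea to the real data in the cell below.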
###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = None # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = None ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = None ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. ###Output _____no_output_____ ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. 
This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = None ###Output _____no_output_____ ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output _____no_output_____ ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output _____no_output_____ ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output _____no_output_____ ###Markdown These words themselves don't tell us much, however if one of these words occured with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here. 
(TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like, but doing so may increase the cost of running the notebook instance.
###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = None new_val_location = None new_train_location = None ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = None # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited typically by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-02-23 13:02:56-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 24.2MB/s in 3.9s 2020-02-23 13:03:00 (20.4 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output _____no_output_____ ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
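# Note: at this point train_X and train_y hold the bag-of-words features and labels produced in the previous step.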
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output _____no_output_____ ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately; instead, we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job.
When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output _____no_output_____ ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = None # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = None ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = None ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. ###Output _____no_output_____ ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. 
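As a rough sketch (one possible completion, not the only one), the deployment TODO in the next cell could use the estimator's high level `deploy` method, assuming the `xgb` estimator from Step 4 is still available: ###Code
# Sketch of the deployment TODO below; assumes the xgb estimator from Step 4 is available.
# The instance type simply mirrors the one used for training.
xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
###Output _____no_output_____ ###Markdown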
Deploying the model will also serve as a nice excuse to mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = None ###Output _____no_output_____ ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which yields samples from the new data set that are not classified correctly. To get the *next* sample we simply call the built-in `next` function on our generator. ###Code print(next(gn)) ###Output _____no_output_____ ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed: maybe there is some new slang that has been introduced, or some other artifact of popular culture has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output _____no_output_____ ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output _____no_output_____ ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question:** What exactly is going on here?
Which words (if any) appear with a larger than expected frequency, and what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like, but doing so may increase the cost of running the notebook instance.
###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = None new_val_location = None new_train_location = None ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = None # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = None s3_new_input_validation = None # TODO: Using the new validation and training data, 'fit' your new model. ###Output _____no_output_____ ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = None ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable). ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. ###Output _____no_output_____ ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown And see how well the model did.
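Before reading off the accuracy in the next cell, here is one possible shape for the TODO cells in this step. It is a sketch only: the hyperparameters simply mirror the original model, and the variable names follow the templates above, so treat the details as assumptions rather than the required answer. ###Code
# Sketch of the TODO cells above: upload the new csv files, then build, fit and test a new model.
# Assumes session, prefix, data_dir, container and role from Step 4 are still in scope.
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)

# New estimator; hyperparameters mirror the original model (other choices are equally valid).
new_xgb = sagemaker.estimator.Estimator(container, role,
                                        train_instance_count=1,
                                        train_instance_type='ml.m4.xlarge',
                                        output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                        sagemaker_session=session)
new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8,
                            silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500)

s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})

# Batch transform the new data with the re-trained model.
new_xgb_transformer = new_xgb.transformer(instance_count=1, instance_type='ml.m4.xlarge')
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output _____no_output_____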
###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output _____no_output_____ ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = None ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. 
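The next cell prints that model name. With it in hand, the endpoint configuration and endpoint update TODOs later in this step could look roughly like the sketch below; this mirrors the low level pattern from the Boston Housing notebooks and is one reasonable completion rather than the only correct answer, and the configuration name shown is just an example. ###Code
# Sketch of the low level endpoint update requested in the TODO cells below.
# The endpoint configuration name is a made-up example; any unique name will do.
from time import gmtime, strftime

new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())

new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
    EndpointConfigName=new_xgb_endpoint_config_name,
    ProductionVariants=[{
        "InstanceType": "ml.m4.xlarge",
        "InitialVariantWeight": 1,
        "InitialInstanceCount": 1,
        "ModelName": new_xgb_transformer.model_name,
        "VariantName": "XGB-Model"
    }])

# Point the existing endpoint at the new configuration; SageMaker swaps the model without downtime.
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint,
                                         EndpointConfigName=new_xgb_endpoint_config_name)
###Output _____no_output_____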
###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = None # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = None ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. 
###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-06-26 14:30:43-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 
200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 9.65MB/s in 12s 2020-06-26 14:30:55 (6.95 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])

val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.

For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
    os.makedirs(data_dir)

pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)

# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3

Amazon's S3 service allows us to store files that can be accessed by both the built-in training models, such as the XGBoost model we will be using, as well as custom models such as the one we will see a little later.

For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker, which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality, in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach, although using the low level approach is certainly an option.

Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the `key_prefix` variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.

For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker

session = sagemaker.Session() # Store the current SageMaker session

# S3 prefix (which folder will we use)
prefix = 'sentiment-update'

test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Creating the XGBoost model

Now that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker.
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.

- Model Artifacts
- Training Code (Container)
- Inference Code (Container)

The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.

The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.

The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role

# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()

# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri

container = get_image_uri(session.boto_region_name, 'xgboost', "0.90-1")

# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
                                    role,      # What is our current IAM Role
                                    train_instance_count=1, # How many compute instances
                                    train_instance_type='ml.m4.xlarge', # What kind of compute instances
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                    sagemaker_session=session)

# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
                        eta=0.2,
                        gamma=4,
                        min_child_weight=6,
                        subsample=0.8,
                        silent=0,
                        objective='binary:logistic',
                        early_stopping_rounds=10,
                        num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost model

Now that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')

xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
2020-06-26 14:36:35 Starting - Starting the training job...
2020-06-26 14:36:38 Starting - Launching requested ML instances......
2020-06-26 14:37:43 Starting - Preparing the instances for training...
2020-06-26 14:38:30 Downloading - Downloading input data...
2020-06-26 14:39:04 Training - Training image download completed. Training in progress..Arguments: train
[2020-06-26:14:39:04:INFO] Running standalone xgboost training.
[2020-06-26:14:39:04:INFO] File size need to be processed in the node: 238.47mb.
Available memory size in the node: 8471.21mb [2020-06-26:14:39:04:INFO] Determined delimiter of CSV input is ',' [14:39:04] S3DistributionType set as FullyReplicated [14:39:06] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-06-26:14:39:06:INFO] Determined delimiter of CSV input is ',' [14:39:06] S3DistributionType set as FullyReplicated [14:39:07] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [14:39:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [0]#011train-error:0.2918#011validation-error:0.3043 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [14:39:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 12 pruned nodes, max_depth=5 [1]#011train-error:0.2778#011validation-error:0.2896 [14:39:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5 [2]#011train-error:0.2786#011validation-error:0.2925 [14:39:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [3]#011train-error:0.264867#011validation-error:0.2789 [14:39:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 14 pruned nodes, max_depth=5 [4]#011train-error:0.257#011validation-error:0.2711 [14:39:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [5]#011train-error:0.251333#011validation-error:0.266 [14:39:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 0 pruned nodes, max_depth=5 [6]#011train-error:0.242133#011validation-error:0.2552 [14:39:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 2 pruned nodes, max_depth=5 [7]#011train-error:0.2352#011validation-error:0.2517 [14:39:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [8]#011train-error:0.235267#011validation-error:0.2483 [14:39:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 2 pruned nodes, max_depth=5 [9]#011train-error:0.2272#011validation-error:0.2404 [14:39:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [10]#011train-error:0.223533#011validation-error:0.2364 [14:39:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [11]#011train-error:0.215667#011validation-error:0.2304 [14:39:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [12]#011train-error:0.213333#011validation-error:0.227 [14:39:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [13]#011train-error:0.208667#011validation-error:0.2237 [14:39:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [14]#011train-error:0.2052#011validation-error:0.2194 [14:39:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [15]#011train-error:0.203#011validation-error:0.2174 [14:39:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [16]#011train-error:0.198333#011validation-error:0.2132 [14:39:33] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [17]#011train-error:0.194533#011validation-error:0.209 [14:39:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [18]#011train-error:0.1918#011validation-error:0.2084 [14:39:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [19]#011train-error:0.1894#011validation-error:0.207 [14:39:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [20]#011train-error:0.187133#011validation-error:0.204 [14:39:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [21]#011train-error:0.185467#011validation-error:0.203 [14:39:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [22]#011train-error:0.182733#011validation-error:0.2005 [14:39:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [23]#011train-error:0.1816#011validation-error:0.1988 [14:39:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [24]#011train-error:0.177933#011validation-error:0.1947 [14:39:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [25]#011train-error:0.175933#011validation-error:0.1917 [14:39:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [26]#011train-error:0.1744#011validation-error:0.1909 [14:39:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [27]#011train-error:0.173267#011validation-error:0.1899 [14:39:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [28]#011train-error:0.1708#011validation-error:0.189 [14:39:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [29]#011train-error:0.168133#011validation-error:0.1887 [14:39:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [30]#011train-error:0.1672#011validation-error:0.187 [14:39:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [31]#011train-error:0.1668#011validation-error:0.186 [14:39:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [32]#011train-error:0.1654#011validation-error:0.1855 [14:39:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [33]#011train-error:0.163933#011validation-error:0.1854 [14:39:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [34]#011train-error:0.162533#011validation-error:0.1852 [14:39:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [35]#011train-error:0.1606#011validation-error:0.1834 [14:39:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [36]#011train-error:0.158333#011validation-error:0.1818 [14:39:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 20 pruned nodes, max_depth=5 [37]#011train-error:0.157467#011validation-error:0.1804 [14:40:00] src/tree/updater_prune.cc:74: tree 
pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5
[38]#011train-error:0.155#011validation-error:0.1807
[14:40:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5
[39]#011train-error:0.153933#011validation-error:0.1788
###Markdown
Testing the model

Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set.

To perform a Batch Transformation we need to first create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback, we can run the `wait()` method.
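If you would rather not block the notebook, a rough alternative (just a sketch, not used in this project) is to poll the job status yourself using `boto3`; note that the transform job name below is a placeholder that you would replace with the actual name shown in the SageMaker console.
###Code
# Sketch of a non-blocking status check (the job name is a placeholder, not a real job).
import boto3

sm_client = boto3.client('sagemaker')
description = sm_client.describe_transform_job(TransformJobName='xgboost-transform-job-name')
print(description['TransformJobStatus'])  # e.g. 'InProgress', 'Completed' or 'Failed'
###Output
_____no_output_____
###Markdown
Here we simply block until the job finishes.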
###Code xgb_transformer.wait() ###Output ....................Arguments: serve [2020-06-26 14:48:02 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-06-26 14:48:02 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-06-26 14:48:02 +0000] [1] [INFO] Using worker: gevent [2020-06-26 14:48:02 +0000] [40] [INFO] Booting worker with pid: 40 [2020-06-26 14:48:02 +0000] [41] [INFO] Booting worker with pid: 41 [2020-06-26 14:48:02 +0000] [42] [INFO] Booting worker with pid: 42 [2020-06-26 14:48:02 +0000] [43] [INFO] Booting worker with pid: 43 [2020-06-26:14:48:02:INFO] Model loaded successfully for worker : 40 [2020-06-26:14:48:02:INFO] Model loaded successfully for worker : 41 [2020-06-26:14:48:02:INFO] Model loaded successfully for worker : 42 [2020-06-26:14:48:02:INFO] Model loaded successfully for worker : 43 [2020-06-26:14:48:36:INFO] Sniff delimiter as ',' [2020-06-26:14:48:36:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:36:INFO] Sniff delimiter as ',' [2020-06-26:14:48:36:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:36:INFO] Sniff delimiter as ',' [2020-06-26:14:48:36:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:37:INFO] Sniff delimiter as ',' [2020-06-26:14:48:37:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:39:INFO] Sniff delimiter as ',' [2020-06-26:14:48:39:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:39:INFO] Sniff delimiter as ',' [2020-06-26:14:48:39:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:39:INFO] Sniff delimiter as ',' [2020-06-26:14:48:39:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:40:INFO] Sniff delimiter as ',' [2020-06-26:14:48:40:INFO] Determined delimiter of CSV input is ',' 2020-06-26T14:48:33.381:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-06-26:14:48:41:INFO] Sniff delimiter as ',' [2020-06-26:14:48:41:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:41:INFO] Sniff delimiter as ',' [2020-06-26:14:48:41:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:42:INFO] Sniff delimiter as ',' [2020-06-26:14:48:42:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:42:INFO] Sniff delimiter as ',' [2020-06-26:14:48:42:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:42:INFO] Sniff delimiter as ',' [2020-06-26:14:48:42:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:42:INFO] Sniff delimiter as ',' [2020-06-26:14:48:42:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:42:INFO] Sniff delimiter as ',' [2020-06-26:14:48:42:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:42:INFO] Sniff delimiter as ',' [2020-06-26:14:48:42:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:44:INFO] Sniff delimiter as ',' [2020-06-26:14:48:44:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:44:INFO] Sniff delimiter as ',' [2020-06-26:14:48:44:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:44:INFO] Sniff delimiter as ',' [2020-06-26:14:48:44:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:44:INFO] Sniff delimiter as ',' [2020-06-26:14:48:44:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:45:INFO] Sniff delimiter as ',' [2020-06-26:14:48:45:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:45:INFO] Sniff delimiter as ',' [2020-06-26:14:48:45:INFO] Determined delimiter of CSV input is ',' 
[2020-06-26:14:48:45:INFO] Sniff delimiter as ',' [2020-06-26:14:48:45:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:45:INFO] Sniff delimiter as ',' [2020-06-26:14:48:45:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:45:INFO] Sniff delimiter as ',' [2020-06-26:14:48:45:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:45:INFO] Sniff delimiter as ',' [2020-06-26:14:48:45:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:47:INFO] Sniff delimiter as ',' [2020-06-26:14:48:47:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:47:INFO] Sniff delimiter as ',' [2020-06-26:14:48:47:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:47:INFO] Sniff delimiter as ',' [2020-06-26:14:48:47:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:47:INFO] Sniff delimiter as ',' [2020-06-26:14:48:47:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:49:INFO] Sniff delimiter as ',' [2020-06-26:14:48:49:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:49:INFO] Sniff delimiter as ',' [2020-06-26:14:48:49:INFO] Sniff delimiter as ',' [2020-06-26:14:48:49:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:49:INFO] Sniff delimiter as ',' [2020-06-26:14:48:49:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:49:INFO] Sniff delimiter as ',' [2020-06-26:14:48:49:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:49:INFO] Sniff delimiter as ',' [2020-06-26:14:48:49:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:49:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:49:INFO] Sniff delimiter as ',' [2020-06-26:14:48:49:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:49:INFO] Sniff delimiter as ',' [2020-06-26:14:48:49:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:51:INFO] Sniff delimiter as ',' [2020-06-26:14:48:51:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:51:INFO] Sniff delimiter as ',' [2020-06-26:14:48:51:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:51:INFO] Sniff delimiter as ',' [2020-06-26:14:48:51:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:51:INFO] Sniff delimiter as ',' [2020-06-26:14:48:51:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:51:INFO] Sniff delimiter as ',' [2020-06-26:14:48:51:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:51:INFO] Sniff delimiter as ',' [2020-06-26:14:48:51:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:51:INFO] Sniff delimiter as ',' [2020-06-26:14:48:51:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:51:INFO] Sniff delimiter as ',' [2020-06-26:14:48:51:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:53:INFO] Sniff delimiter as ',' [2020-06-26:14:48:53:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:53:INFO] Sniff delimiter as ',' [2020-06-26:14:48:53:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:54:INFO] Sniff delimiter as ',' [2020-06-26:14:48:54:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:54:INFO] Sniff delimiter as ',' [2020-06-26:14:48:54:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:54:INFO] Sniff delimiter as ',' [2020-06-26:14:48:54:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:54:INFO] Sniff delimiter as ',' [2020-06-26:14:48:54:INFO] Determined delimiter of CSV input is ',' 
[2020-06-26:14:48:54:INFO] Sniff delimiter as ',' [2020-06-26:14:48:54:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:54:INFO] Sniff delimiter as ',' [2020-06-26:14:48:54:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:56:INFO] Sniff delimiter as ',' [2020-06-26:14:48:56:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:56:INFO] Sniff delimiter as ',' [2020-06-26:14:48:56:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:56:INFO] Sniff delimiter as ',' [2020-06-26:14:48:56:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:56:INFO] Sniff delimiter as ',' [2020-06-26:14:48:56:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:56:INFO] Sniff delimiter as ',' [2020-06-26:14:48:56:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:56:INFO] Sniff delimiter as ',' [2020-06-26:14:48:56:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:56:INFO] Sniff delimiter as ',' [2020-06-26:14:48:56:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:56:INFO] Sniff delimiter as ',' [2020-06-26:14:48:56:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:58:INFO] Sniff delimiter as ',' [2020-06-26:14:48:58:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:58:INFO] Sniff delimiter as ',' [2020-06-26:14:48:58:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:58:INFO] Sniff delimiter as ',' [2020-06-26:14:48:58:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:59:INFO] Sniff delimiter as ',' [2020-06-26:14:48:59:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:59:INFO] Sniff delimiter as ',' [2020-06-26:14:48:59:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:58:INFO] Sniff delimiter as ',' [2020-06-26:14:48:58:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:59:INFO] Sniff delimiter as ',' [2020-06-26:14:48:59:INFO] Determined delimiter of CSV input is ',' [2020-06-26:14:48:59:INFO] Sniff delimiter as ',' [2020-06-26:14:48:59:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-eu-central-1-648654006923/xgboost-2020-06-26-14-44-50-772/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. 
Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, max_features=5000, tokenizer=lambda x:x, preprocessor=lambda x:x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, "new_data.csv"), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. 
xgb_transformer.transform(new_data_location, content_type="text/csv", split_type="Line") xgb_transformer.wait() ###Output .....................Arguments: serve [2020-06-26 15:33:39 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-06-26 15:33:39 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-06-26 15:33:39 +0000] [1] [INFO] Using worker: gevent [2020-06-26 15:33:39 +0000] [37] [INFO] Booting worker with pid: 37 [2020-06-26 15:33:39 +0000] [38] [INFO] Booting worker with pid: 38 [2020-06-26 15:33:39 +0000] [39] [INFO] Booting worker with pid: 39 [2020-06-26 15:33:39 +0000] [40] [INFO] Booting worker with pid: 40 [2020-06-26:15:33:39:INFO] Model loaded successfully for worker : 37 [2020-06-26:15:33:39:INFO] Model loaded successfully for worker : 38 [2020-06-26:15:33:39:INFO] Model loaded successfully for worker : 40 [2020-06-26:15:33:39:INFO] Model loaded successfully for worker : 39 2020-06-26T15:33:51.822:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-06-26:15:33:55:INFO] Sniff delimiter as ',' [2020-06-26:15:33:55:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:33:55:INFO] Sniff delimiter as ',' [2020-06-26:15:33:55:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:33:55:INFO] Sniff delimiter as ',' [2020-06-26:15:33:55:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:33:55:INFO] Sniff delimiter as ',' [2020-06-26:15:33:55:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:33:55:INFO] Sniff delimiter as ',' [2020-06-26:15:33:55:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:33:55:INFO] Sniff delimiter as ',' [2020-06-26:15:33:55:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:33:55:INFO] Sniff delimiter as ',' [2020-06-26:15:33:55:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:33:55:INFO] Sniff delimiter as ',' [2020-06-26:15:33:55:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:33:57:INFO] Sniff delimiter as ',' [2020-06-26:15:33:57:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:33:57:INFO] Sniff delimiter as ',' [2020-06-26:15:33:57:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:33:57:INFO] Sniff delimiter as ',' [2020-06-26:15:33:57:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:33:57:INFO] Sniff delimiter as ',' [2020-06-26:15:33:57:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:33:57:INFO] Sniff delimiter as ',' [2020-06-26:15:33:57:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:33:58:INFO] Sniff delimiter as ',' [2020-06-26:15:33:58:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:33:57:INFO] Sniff delimiter as ',' [2020-06-26:15:33:57:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:33:58:INFO] Sniff delimiter as ',' [2020-06-26:15:33:58:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:33:59:INFO] Sniff delimiter as ',' [2020-06-26:15:33:59:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:00:INFO] Sniff delimiter as ',' [2020-06-26:15:33:59:INFO] Sniff delimiter as ',' [2020-06-26:15:33:59:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:00:INFO] Sniff delimiter as ',' [2020-06-26:15:34:00:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:00:INFO] Sniff delimiter as ',' [2020-06-26:15:34:00:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:00:INFO] Sniff delimiter as ',' [2020-06-26:15:34:00:INFO] Determined delimiter of CSV input is ',' 
[2020-06-26:15:34:00:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:00:INFO] Sniff delimiter as ',' [2020-06-26:15:34:00:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:00:INFO] Sniff delimiter as ',' [2020-06-26:15:34:00:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:02:INFO] Sniff delimiter as ',' [2020-06-26:15:34:02:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:02:INFO] Sniff delimiter as ',' [2020-06-26:15:34:02:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:02:INFO] Sniff delimiter as ',' [2020-06-26:15:34:02:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:02:INFO] Sniff delimiter as ',' [2020-06-26:15:34:02:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:02:INFO] Sniff delimiter as ',' [2020-06-26:15:34:02:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:02:INFO] Sniff delimiter as ',' [2020-06-26:15:34:02:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:02:INFO] Sniff delimiter as ',' [2020-06-26:15:34:02:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:02:INFO] Sniff delimiter as ',' [2020-06-26:15:34:02:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:05:INFO] Sniff delimiter as ',' [2020-06-26:15:34:05:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:05:INFO] Sniff delimiter as ',' [2020-06-26:15:34:05:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:05:INFO] Sniff delimiter as ',' [2020-06-26:15:34:05:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:05:INFO] Sniff delimiter as ',' [2020-06-26:15:34:05:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:07:INFO] Sniff delimiter as ',' [2020-06-26:15:34:07:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:07:INFO] Sniff delimiter as ',' [2020-06-26:15:34:07:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:07:INFO] Sniff delimiter as ',' [2020-06-26:15:34:07:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:07:INFO] Sniff delimiter as ',' [2020-06-26:15:34:07:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:07:INFO] Sniff delimiter as ',' [2020-06-26:15:34:07:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:07:INFO] Sniff delimiter as ',' [2020-06-26:15:34:07:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:07:INFO] Sniff delimiter as ',' [2020-06-26:15:34:07:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:07:INFO] Sniff delimiter as ',' [2020-06-26:15:34:07:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:09:INFO] Sniff delimiter as ',' [2020-06-26:15:34:09:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:09:INFO] Sniff delimiter as ',' [2020-06-26:15:34:09:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:09:INFO] Sniff delimiter as ',' [2020-06-26:15:34:09:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:10:INFO] Sniff delimiter as ',' [2020-06-26:15:34:10:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:10:INFO] Sniff delimiter as ',' [2020-06-26:15:34:10:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:09:INFO] Sniff delimiter as ',' [2020-06-26:15:34:09:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:10:INFO] Sniff delimiter as ',' [2020-06-26:15:34:10:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:10:INFO] Sniff delimiter as ',' 
[2020-06-26:15:34:10:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:12:INFO] Sniff delimiter as ',' [2020-06-26:15:34:12:INFO] Sniff delimiter as ',' [2020-06-26:15:34:12:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:12:INFO] Sniff delimiter as ',' [2020-06-26:15:34:12:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:12:INFO] Sniff delimiter as ',' [2020-06-26:15:34:12:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:12:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:12:INFO] Sniff delimiter as ',' [2020-06-26:15:34:12:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:12:INFO] Sniff delimiter as ',' [2020-06-26:15:34:12:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:12:INFO] Sniff delimiter as ',' [2020-06-26:15:34:12:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:12:INFO] Sniff delimiter as ',' [2020-06-26:15:34:12:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:14:INFO] Sniff delimiter as ',' [2020-06-26:15:34:14:INFO] Determined delimiter of CSV input is ',' [2020-06-26:15:34:14:INFO] Sniff delimiter as ',' [2020-06-26:15:34:14:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-eu-central-1-648654006923/xgboost-2020-06-26-15-30-18-599/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(1,"ml.m4.xlarge") ###Output WARNING:sagemaker:Using already existing model: xgboost-2020-06-26-14-36-35-093 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. 
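Before we start sending individual reviews to the endpoint, a quick sanity check on the batch-transform outputs we already have can confirm that something has shifted. This is only a rough sketch (not part of the original workflow) and it assumes that `test.csv.out` and `new_data.csv.out`, downloaded in the earlier steps, are still present in `data_dir`.
###Code
# Rough drift check (sketch): compare the share of reviews the model labels positive
# on the original test set versus on the newly collected data.
old_preds = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None).squeeze().round()
new_preds = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None).squeeze().round()

print('Predicted-positive rate on the original test set:', old_preds.mean())
print('Predicted-positive rate on the new data:         ', new_preds.mean())
###Output
_____no_output_____
###Markdown
With that context, we go back to the deployed endpoint and first tell it what format the data we are sending is in.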
###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['rick', 'sloan', 'allow', 'make', 'five', 'movi', 'harder', 'believ', 'cold', 'fusion', 'film', 'absolut', 'crimin', 'watch', 'movi', 'thought', 'mano', 'hand', 'fate', 'wors', 'piec', 'crap', 'ever', 'saw', 'least', 'mano', 'move', 'slowli', 'might', 'fall', 'asleep', 'therebi', 'rescu', 'eye', 'pain', 'suffer', 'greatest', 'tragedi', 'movi', 'old', 'man', 'keep', 'hobgoblin', 'lock', 'make', 'final', 'scene', 'time', 'spent', 'watch', 'movi', 'absolut', 'wast', 'life', 'banana'], 1) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'victorian', 'spill', '21st', 'weari', 'ghetto', 'playboy', 'reincarn'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'masterson', 'omin', 'dubiou', 'sophi', 'optimist', 'orchestr', 'banana'} ###Markdown These words themselves don't tell us much, however if one of these words occured with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here. 
Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?

**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data.

**Answer**

Even if the most common words didn't change much, there might be a shift in which of the words are used in a positive or negative context. Counting how often each word appears in positive versus negative reviews might prove illuminating.
###Code
from collections import Counter

total_count = Counter()
pos_count = Counter()
neg_count = Counter()

for label, review in zip(new_Y, new_X):
    for word in review:
        total_count[word] += 1
        if label:
            pos_count[word] += 1
        else:
            neg_count[word] += 1
###Output
_____no_output_____
###Markdown
(TODO) Build a new model

Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.

To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.

**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.

In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd

# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])

new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited.
Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, "new_data.csv"), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, "new_validation.csv"), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, "new_train.csv"), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, role, train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({"train":s3_new_input_train, "validation":s3_new_input_validation}) ###Output 2020-06-26 17:36:17 Starting - Starting the training job... 2020-06-26 17:36:19 Starting - Launching requested ML instances......... 2020-06-26 17:37:51 Starting - Preparing the instances for training... 2020-06-26 17:38:38 Downloading - Downloading input data... 2020-06-26 17:39:08 Training - Downloading the training image... 2020-06-26 17:39:32 Training - Training image download completed. Training in progress.INFO:sagemaker-containers:Imported framework sagemaker_xgboost_container.training INFO:sagemaker-containers:Failed to parse hyperparameter objective value binary:logistic to Json. 
Returning the value itself INFO:sagemaker-containers:No GPUs detected (normal if no gpus installed) INFO:sagemaker_xgboost_container.training:Running XGBoost Sagemaker in algorithm mode INFO:root:Determined delimiter of CSV input is ',' INFO:root:Determined delimiter of CSV input is ',' INFO:root:Determined delimiter of CSV input is ',' [17:39:36] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, INFO:root:Determined delimiter of CSV input is ',' [17:39:37] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, INFO:root:Single node training. INFO:root:Train matrix has 15000 rows INFO:root:Validation matrix has 10000 rows [0]#011train-error:0.302467#011validation-error:0.314 [1]#011train-error:0.286933#011validation-error:0.2979 [2]#011train-error:0.2898#011validation-error:0.303 [3]#011train-error:0.2822#011validation-error:0.297 [4]#011train-error:0.2744#011validation-error:0.2854 [5]#011train-error:0.265667#011validation-error:0.2772 [6]#011train-error:0.2526#011validation-error:0.268 [7]#011train-error:0.2426#011validation-error:0.2617 [8]#011train-error:0.2394#011validation-error:0.2573 [9]#011train-error:0.230333#011validation-error:0.2474 [10]#011train-error:0.2244#011validation-error:0.2422 [11]#011train-error:0.2238#011validation-error:0.2404 [12]#011train-error:0.217933#011validation-error:0.2369 [13]#011train-error:0.2168#011validation-error:0.2354 [14]#011train-error:0.2124#011validation-error:0.2294 [15]#011train-error:0.208667#011validation-error:0.2271 [16]#011train-error:0.208267#011validation-error:0.2249 [17]#011train-error:0.205733#011validation-error:0.2211 [18]#011train-error:0.203733#011validation-error:0.2179 [19]#011train-error:0.200733#011validation-error:0.2135 [20]#011train-error:0.199333#011validation-error:0.2157 [21]#011train-error:0.198067#011validation-error:0.2134 [22]#011train-error:0.194867#011validation-error:0.2124 [23]#011train-error:0.192867#011validation-error:0.209 [24]#011train-error:0.191#011validation-error:0.2086 [25]#011train-error:0.189333#011validation-error:0.2057 [26]#011train-error:0.187#011validation-error:0.2024 [27]#011train-error:0.1848#011validation-error:0.2006 [28]#011train-error:0.1844#011validation-error:0.1998 [29]#011train-error:0.181267#011validation-error:0.1973 [30]#011train-error:0.179733#011validation-error:0.1967 [31]#011train-error:0.177667#011validation-error:0.1955 [32]#011train-error:0.177267#011validation-error:0.1945 [33]#011train-error:0.174333#011validation-error:0.1926 [34]#011train-error:0.174333#011validation-error:0.1933 [35]#011train-error:0.172067#011validation-error:0.1933 [36]#011train-error:0.1714#011validation-error:0.1927 [37]#011train-error:0.1692#011validation-error:0.191 [38]#011train-error:0.167867#011validation-error:0.1915 [39]#011train-error:0.166333#011validation-error:0.1883 [40]#011train-error:0.164867#011validation-error:0.1884 [41]#011train-error:0.1632#011validation-error:0.1878 [42]#011train-error:0.162133#011validation-error:0.1865 [43]#011train-error:0.1604#011validation-error:0.1856 [44]#011train-error:0.159467#011validation-error:0.1836 [45]#011train-error:0.1586#011validation-error:0.1851 [46]#011train-error:0.157867#011validation-error:0.1848 [47]#011train-error:0.156467#011validation-error:0.1839 [48]#011train-error:0.1552#011validation-error:0.1832 [49]#011train-error:0.153733#011validation-error:0.1848 [50]#011train-error:0.153#011validation-error:0.183 
[51]#011train-error:0.152933#011validation-error:0.1822 [52]#011train-error:0.152467#011validation-error:0.1813 [53]#011train-error:0.151533#011validation-error:0.181 [54]#011train-error:0.150933#011validation-error:0.1801 [55]#011train-error:0.151533#011validation-error:0.1812 [56]#011train-error:0.151067#011validation-error:0.1817 [57]#011train-error:0.151333#011validation-error:0.1817 [58]#011train-error:0.150267#011validation-error:0.182 [59]#011train-error:0.148733#011validation-error:0.1807 [60]#011train-error:0.1484#011validation-error:0.1806 [61]#011train-error:0.1482#011validation-error:0.1805 [62]#011train-error:0.1468#011validation-error:0.1798 [63]#011train-error:0.146#011validation-error:0.18 [64]#011train-error:0.1448#011validation-error:0.1791 [65]#011train-error:0.143667#011validation-error:0.1784 [66]#011train-error:0.143467#011validation-error:0.1783 [67]#011train-error:0.1422#011validation-error:0.1774 [68]#011train-error:0.140867#011validation-error:0.1787 [69]#011train-error:0.139933#011validation-error:0.1776 [70]#011train-error:0.1396#011validation-error:0.178 [71]#011train-error:0.138067#011validation-error:0.1782 [72]#011train-error:0.1374#011validation-error:0.1773 [73]#011train-error:0.137333#011validation-error:0.1771 [74]#011train-error:0.136667#011validation-error:0.1765 [75]#011train-error:0.1356#011validation-error:0.1749 [76]#011train-error:0.135267#011validation-error:0.1752 [77]#011train-error:0.134533#011validation-error:0.1758 [78]#011train-error:0.134133#011validation-error:0.1754 [79]#011train-error:0.132733#011validation-error:0.1753 [80]#011train-error:0.1318#011validation-error:0.175 [81]#011train-error:0.131533#011validation-error:0.1753 [82]#011train-error:0.131067#011validation-error:0.1752 [83]#011train-error:0.130933#011validation-error:0.1743 [84]#011train-error:0.1294#011validation-error:0.1745 [85]#011train-error:0.129133#011validation-error:0.1733 [86]#011train-error:0.129133#011validation-error:0.1732 [87]#011train-error:0.1286#011validation-error:0.1736 [88]#011train-error:0.128133#011validation-error:0.1739 [89]#011train-error:0.128133#011validation-error:0.1735 [90]#011train-error:0.127867#011validation-error:0.1727 [91]#011train-error:0.128467#011validation-error:0.1727 [92]#011train-error:0.128467#011validation-error:0.1722 [93]#011train-error:0.128#011validation-error:0.1719 [94]#011train-error:0.127933#011validation-error:0.1719 [95]#011train-error:0.1276#011validation-error:0.1723 [96]#011train-error:0.127#011validation-error:0.1721 [97]#011train-error:0.1266#011validation-error:0.1721 [98]#011train-error:0.125933#011validation-error:0.1722 [99]#011train-error:0.125267#011validation-error:0.1733 [100]#011train-error:0.124267#011validation-error:0.1726 [101]#011train-error:0.1242#011validation-error:0.1729 [102]#011train-error:0.124067#011validation-error:0.1729 [103]#011train-error:0.1234#011validation-error:0.1724 2020-06-26 17:41:43 Uploading - Uploading generated training model 2020-06-26 17:41:43 Completed - Training job completed ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. 
We already trained our model on the new data, so testing it on that same data shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? **Answer:** By A/B testing the new model on future user input before switching over from the old model to the new one. First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count=1, instance_type='ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next, we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable). ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type="text/csv", split_type="Line") new_xgb_transformer.wait() ###Output ......................[2020-06-26 17:57:27 +0000] [15] [INFO] Starting gunicorn 19.10.0 [2020-06-26 17:57:27 +0000] [15] [INFO] Listening at: unix:/tmp/gunicorn.sock (15) [2020-06-26 17:57:27 +0000] [15] [INFO] Using worker: gevent [2020-06-26 17:57:27 +0000] [22] [INFO] Booting worker with pid: 22 [2020-06-26 17:57:27 +0000] [23] [INFO] Booting worker with pid: 23 [2020-06-26 17:57:27 +0000] [27] [INFO] Booting worker with pid: 27 [2020-06-26 17:57:27 +0000] [28] [INFO] Booting worker with pid: 28 [2020-06-26:17:57:49:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [26/Jun/2020:17:57:49 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:57:49 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" [2020-06-26:17:57:49:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [26/Jun/2020:17:57:49 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:57:49 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" 2020-06-26T17:57:49.700:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-06-26:17:57:52:INFO] No GPUs detected (normal if no gpus installed) [2020-06-26:17:57:52:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:57:52:INFO] No GPUs detected (normal if no gpus installed) [2020-06-26:17:57:52:INFO] No GPUs detected (normal if no gpus installed) [2020-06-26:17:57:52:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:57:52:INFO] No GPUs detected (normal if no gpus installed) [2020-06-26:17:57:52:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:57:52:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:57:52:INFO] No GPUs detected (normal if no gpus installed) [2020-06-26:17:57:52:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:57:52:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:57:52:INFO] No GPUs detected (normal if no gpus installed) [2020-06-26:17:57:52:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:57:52:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [26/Jun/2020:17:57:55 +0000] "POST /invocations HTTP/1.1" 200 12115 "-" "Go-http-client/1.1" [2020-06-26:17:57:56:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [26/Jun/2020:17:57:56 +0000] "POST /invocations HTTP/1.1" 200 12119
"-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:57:56 +0000] "POST /invocations HTTP/1.1" 200 12090 "-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:57:56 +0000] "POST /invocations HTTP/1.1" 200 12115 "-" "Go-http-client/1.1" [2020-06-26:17:57:56:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:57:56:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [26/Jun/2020:17:57:55 +0000] "POST /invocations HTTP/1.1" 200 12115 "-" "Go-http-client/1.1" [2020-06-26:17:57:56:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [26/Jun/2020:17:57:56 +0000] "POST /invocations HTTP/1.1" 200 12119 "-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:57:56 +0000] "POST /invocations HTTP/1.1" 200 12090 "-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:57:56 +0000] "POST /invocations HTTP/1.1" 200 12115 "-" "Go-http-client/1.1" [2020-06-26:17:57:56:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:57:56:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:57:56:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:57:56:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [26/Jun/2020:17:57:59 +0000] "POST /invocations HTTP/1.1" 200 12126 "-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:57:59 +0000] "POST /invocations HTTP/1.1" 200 12126 "-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:57:59 +0000] "POST /invocations HTTP/1.1" 200 12126 "-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:57:59 +0000] "POST /invocations HTTP/1.1" 200 12126 "-" "Go-http-client/1.1" [2020-06-26:17:57:59:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [26/Jun/2020:17:57:59 +0000] "POST /invocations HTTP/1.1" 200 12123 "-" "Go-http-client/1.1" [2020-06-26:17:58:00:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [26/Jun/2020:17:58:00 +0000] "POST /invocations HTTP/1.1" 200 12115 "-" "Go-http-client/1.1" [2020-06-26:17:58:00:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:58:00:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:57:59:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [26/Jun/2020:17:57:59 +0000] "POST /invocations HTTP/1.1" 200 12123 "-" "Go-http-client/1.1" [2020-06-26:17:58:00:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [26/Jun/2020:17:58:00 +0000] "POST /invocations HTTP/1.1" 200 12115 "-" "Go-http-client/1.1" [2020-06-26:17:58:00:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:58:00:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:58:03:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:58:03:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:58:03:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:58:03:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [26/Jun/2020:17:58:06 +0000] "POST /invocations HTTP/1.1" 200 12102 "-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:58:07 +0000] "POST /invocations HTTP/1.1" 200 12073 "-" "Go-http-client/1.1" [2020-06-26:17:58:07:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:58:07:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [26/Jun/2020:17:58:07 +0000] "POST /invocations HTTP/1.1" 200 12085 "-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:58:07 +0000] "POST /invocations HTTP/1.1" 200 12090 "-" "Go-http-client/1.1" [2020-06-26:17:58:07:INFO] Determined delimiter of CSV input is ',' 
[2020-06-26:17:58:07:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [26/Jun/2020:17:58:06 +0000] "POST /invocations HTTP/1.1" 200 12102 "-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:58:07 +0000] "POST /invocations HTTP/1.1" 200 12073 "-" "Go-http-client/1.1" [2020-06-26:17:58:07:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:58:07:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [26/Jun/2020:17:58:07 +0000] "POST /invocations HTTP/1.1" 200 12085 "-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:58:07 +0000] "POST /invocations HTTP/1.1" 200 12090 "-" "Go-http-client/1.1" [2020-06-26:17:58:07:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:58:07:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [26/Jun/2020:17:58:10 +0000] "POST /invocations HTTP/1.1" 200 12111 "-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:58:10 +0000] "POST /invocations HTTP/1.1" 200 12120 "-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:58:10 +0000] "POST /invocations HTTP/1.1" 200 12111 "-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:58:10 +0000] "POST /invocations HTTP/1.1" 200 12120 "-" "Go-http-client/1.1" [2020-06-26:17:58:10:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:58:10:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:58:10:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:58:10:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [26/Jun/2020:17:58:11 +0000] "POST /invocations HTTP/1.1" 200 12087 "-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:58:11 +0000] "POST /invocations HTTP/1.1" 200 12118 "-" "Go-http-client/1.1" [2020-06-26:17:58:11:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:58:11:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [26/Jun/2020:17:58:11 +0000] "POST /invocations HTTP/1.1" 200 12087 "-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:58:11 +0000] "POST /invocations HTTP/1.1" 200 12118 "-" "Go-http-client/1.1" [2020-06-26:17:58:11:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:58:11:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [26/Jun/2020:17:58:14 +0000] "POST /invocations HTTP/1.1" 200 12103 "-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:58:14 +0000] "POST /invocations HTTP/1.1" 200 12123 "-" "Go-http-client/1.1" [2020-06-26:17:58:14:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:58:14:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [26/Jun/2020:17:58:14 +0000] "POST /invocations HTTP/1.1" 200 12129 "-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:58:14 +0000] "POST /invocations HTTP/1.1" 200 12103 "-" "Go-http-client/1.1" 169.254.255.130 - - [26/Jun/2020:17:58:14 +0000] "POST /invocations HTTP/1.1" 200 12123 "-" "Go-http-client/1.1" [2020-06-26:17:58:14:INFO] Determined delimiter of CSV input is ',' [2020-06-26:17:58:14:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [26/Jun/2020:17:58:14 +0000] "POST /invocations HTTP/1.1" 200 12129 "-" "Go-http-client/1.1" ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-eu-central-1-648654006923/sagemaker-xgboost-2020-06-26-17-54-03-929/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. 
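Besides the overall accuracy computed in the next cell, it can be worth checking that the mistakes are not concentrated in one sentiment class. The following optional cell is an illustrative sketch, not part of the original walkthrough: it re-reads the transform output we just copied and prints a confusion matrix, assuming `new_Y` is still in memory.
###Code
# OPTIONAL illustrative sketch: accuracy alone can hide class-specific problems, so we
# also look at a confusion matrix for the new model on the new data. Assumes the previous
# cell copied new_data.csv.out into data_dir and that new_Y (the hand-labelled sentiments
# of the new reviews) is still in memory.
from sklearn.metrics import confusion_matrix

sketch_preds = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
sketch_preds = [round(num) for num in sketch_preds.squeeze().values]

# Rows are the true labels (0 = negative, 1 = positive), columns are the predicted labels.
print(confusion_matrix(new_Y, sketch_preds))
###Output _____no_output_____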
###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() new_xgb_transformer.output_path data_dir !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. 
The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output -------------- ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. 
As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. 
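If shell magics are not available in your environment, the same download-and-extract step can be written in plain Python. The optional cell below is an equivalent sketch (the names `imdb_url` and `imdb_archive` exist only in this sketch); the original shell-magic cell follows it.
###Code
# OPTIONAL sketch: a plain-Python equivalent of the shell-magic download/extract cell below.
import os
import tarfile
import urllib.request

imdb_url = 'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'
imdb_archive = '../data/aclImdb_v1.tar.gz'

os.makedirs('../data', exist_ok=True)
if not os.path.exists(imdb_archive):
    # Download the archive only if it is not already cached locally.
    urllib.request.urlretrieve(imdb_url, imdb_archive)

with tarfile.open(imdb_archive, 'r:gz') as tar:
    tar.extractall(path='../data')
###Output _____no_output_____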
###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-04-19 01:59:30-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 22.9MB/s in 5.1s 2020-04-19 01:59:35 (15.9 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test %%time train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test %%time # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl CPU times: user 793 ms, sys: 207 ms, total: 1e+03 ms Wall time: 983 ms ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary %%time # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code %%time import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output CPU times: user 7.38 s, sys: 1.22 s, total: 8.6 s Wall time: 7.27 s ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost', '0.90-1') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') %%time xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2020-04-19 02:04:54 Starting - Starting the training job... 2020-04-19 02:04:55 Starting - Launching requested ML instances...... 2020-04-19 02:06:09 Starting - Preparing the instances for training...... 2020-04-19 02:07:06 Downloading - Downloading input data... 2020-04-19 02:07:37 Training - Downloading the training image... 2020-04-19 02:07:59 Training - Training image download completed. Training in progress..INFO:sagemaker-containers:Imported framework sagemaker_xgboost_container.training INFO:sagemaker-containers:Failed to parse hyperparameter objective value binary:logistic to Json.
Returning the value itself INFO:sagemaker-containers:No GPUs detected (normal if no gpus installed) INFO:sagemaker_xgboost_container.training:Running XGBoost Sagemaker in algorithm mode INFO:root:Determined delimiter of CSV input is ',' INFO:root:Determined delimiter of CSV input is ',' INFO:root:Determined delimiter of CSV input is ',' [02:08:04] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, INFO:root:Determined delimiter of CSV input is ',' [02:08:05] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, INFO:root:Single node training. INFO:root:Train matrix has 15000 rows INFO:root:Validation matrix has 10000 rows [0]#011train-error:0.295867#011validation-error:0.302 [1]#011train-error:0.282733#011validation-error:0.2849 [2]#011train-error:0.265267#011validation-error:0.2754 [3]#011train-error:0.2606#011validation-error:0.2672 [4]#011train-error:0.255733#011validation-error:0.2639 [5]#011train-error:0.2492#011validation-error:0.2594 [6]#011train-error:0.245867#011validation-error:0.2542 [7]#011train-error:0.236133#011validation-error:0.2456 [8]#011train-error:0.226#011validation-error:0.2362 [9]#011train-error:0.221#011validation-error:0.2331 [10]#011train-error:0.2178#011validation-error:0.2268 [11]#011train-error:0.2116#011validation-error:0.2237 [12]#011train-error:0.209#011validation-error:0.221 [13]#011train-error:0.2068#011validation-error:0.22 [14]#011train-error:0.202133#011validation-error:0.2161 [15]#011train-error:0.1998#011validation-error:0.2128 [16]#011train-error:0.197467#011validation-error:0.2087 [17]#011train-error:0.1946#011validation-error:0.2067 [18]#011train-error:0.1908#011validation-error:0.2042 [19]#011train-error:0.1888#011validation-error:0.202 [20]#011train-error:0.185#011validation-error:0.2006 [21]#011train-error:0.181467#011validation-error:0.1969 [22]#011train-error:0.179533#011validation-error:0.1964 [23]#011train-error:0.177467#011validation-error:0.1959 [24]#011train-error:0.1746#011validation-error:0.1962 [25]#011train-error:0.1726#011validation-error:0.1934 [26]#011train-error:0.1714#011validation-error:0.1916 [27]#011train-error:0.168667#011validation-error:0.1909 [28]#011train-error:0.1664#011validation-error:0.189 [29]#011train-error:0.165467#011validation-error:0.1882 [30]#011train-error:0.1628#011validation-error:0.1854 [31]#011train-error:0.1618#011validation-error:0.1855 [32]#011train-error:0.161#011validation-error:0.1845 [33]#011train-error:0.1602#011validation-error:0.1822 [34]#011train-error:0.159133#011validation-error:0.1811 [35]#011train-error:0.1566#011validation-error:0.1792 [36]#011train-error:0.155067#011validation-error:0.1783 [37]#011train-error:0.1536#011validation-error:0.178 [38]#011train-error:0.152733#011validation-error:0.1771 [39]#011train-error:0.151333#011validation-error:0.176 [40]#011train-error:0.1502#011validation-error:0.1756 [41]#011train-error:0.149333#011validation-error:0.175 [42]#011train-error:0.1474#011validation-error:0.1739 [43]#011train-error:0.147667#011validation-error:0.1729 [44]#011train-error:0.146667#011validation-error:0.1739 [45]#011train-error:0.145933#011validation-error:0.1719 [46]#011train-error:0.1448#011validation-error:0.1717 [47]#011train-error:0.144#011validation-error:0.1704 [48]#011train-error:0.143333#011validation-error:0.1703 [49]#011train-error:0.1438#011validation-error:0.1698 [50]#011train-error:0.142867#011validation-error:0.168 
[51]#011train-error:0.142467#011validation-error:0.1671 [52]#011train-error:0.141733#011validation-error:0.1666 [53]#011train-error:0.139267#011validation-error:0.1652 [54]#011train-error:0.138267#011validation-error:0.1656 [55]#011train-error:0.1366#011validation-error:0.1656 [56]#011train-error:0.1362#011validation-error:0.1646 [57]#011train-error:0.136#011validation-error:0.1645 [58]#011train-error:0.1348#011validation-error:0.164 [59]#011train-error:0.135133#011validation-error:0.1643 [60]#011train-error:0.1342#011validation-error:0.1633 [61]#011train-error:0.132667#011validation-error:0.1618 [62]#011train-error:0.132667#011validation-error:0.1611 [63]#011train-error:0.132667#011validation-error:0.1603 [64]#011train-error:0.132533#011validation-error:0.1598 [65]#011train-error:0.130467#011validation-error:0.1605 [66]#011train-error:0.129867#011validation-error:0.1592 [67]#011train-error:0.128267#011validation-error:0.1586 [68]#011train-error:0.127667#011validation-error:0.1572 [69]#011train-error:0.126933#011validation-error:0.1568 [70]#011train-error:0.1252#011validation-error:0.1562 [71]#011train-error:0.124333#011validation-error:0.1558 [72]#011train-error:0.1234#011validation-error:0.1558 [73]#011train-error:0.1232#011validation-error:0.156 [74]#011train-error:0.122533#011validation-error:0.1554 [75]#011train-error:0.122533#011validation-error:0.1545 [76]#011train-error:0.121867#011validation-error:0.1553 [77]#011train-error:0.1218#011validation-error:0.1545 [78]#011train-error:0.120733#011validation-error:0.1536 [79]#011train-error:0.1208#011validation-error:0.1529 [80]#011train-error:0.119867#011validation-error:0.153 [81]#011train-error:0.1192#011validation-error:0.1524 [82]#011train-error:0.118333#011validation-error:0.152 [83]#011train-error:0.117533#011validation-error:0.152 [84]#011train-error:0.116533#011validation-error:0.1517 [85]#011train-error:0.115733#011validation-error:0.1519 [86]#011train-error:0.114#011validation-error:0.1515 [87]#011train-error:0.114067#011validation-error:0.1507 [88]#011train-error:0.113667#011validation-error:0.1498 [89]#011train-error:0.113#011validation-error:0.1506 [90]#011train-error:0.112933#011validation-error:0.1505 [91]#011train-error:0.112067#011validation-error:0.1499 [92]#011train-error:0.112#011validation-error:0.1496 [93]#011train-error:0.111267#011validation-error:0.1504 [94]#011train-error:0.111267#011validation-error:0.1496 [95]#011train-error:0.110533#011validation-error:0.1499 [96]#011train-error:0.110067#011validation-error:0.1502 [97]#011train-error:0.1098#011validation-error:0.1497 [98]#011train-error:0.1092#011validation-error:0.1489 [99]#011train-error:0.107733#011validation-error:0.1496 [100]#011train-error:0.108133#011validation-error:0.1497 [101]#011train-error:0.107333#011validation-error:0.1493 [102]#011train-error:0.106133#011validation-error:0.1491 [103]#011train-error:0.105933#011validation-error:0.1489 [104]#011train-error:0.105333#011validation-error:0.1473 [105]#011train-error:0.105467#011validation-error:0.1472 [106]#011train-error:0.105133#011validation-error:0.1481 [107]#011train-error:0.104533#011validation-error:0.1467 [108]#011train-error:0.103867#011validation-error:0.147 [109]#011train-error:0.103467#011validation-error:0.1468 [110]#011train-error:0.103333#011validation-error:0.1463 [111]#011train-error:0.103#011validation-error:0.1456 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMakers Batch Transform functionality. 
Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can peform inference on a large number of samples. An example of this in industry might be peforming an end of month report. This method of inference can also be useful to us as it means to can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer objects from our trained estimator object. ###Code %%time xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output CPU times: user 17.9 ms, sys: 87 µs, total: 18 ms Wall time: 403 ms ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code %%time xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output CPU times: user 8.47 ms, sys: 0 ns, total: 8.47 ms Wall time: 349 ms ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output ..........................[2020-04-19 02:15:37 +0000] [15] [INFO] Starting gunicorn 19.10.0 [2020-04-19 02:15:37 +0000] [15] [INFO] Listening at: unix:/tmp/gunicorn.sock (15) [2020-04-19 02:15:37 +0000] [15] [INFO] Using worker: gevent [2020-04-19 02:15:37 +0000] [22] [INFO] Booting worker with pid: 22 [2020-04-19 02:15:37 +0000] [23] [INFO] Booting worker with pid: 23 [2020-04-19 02:15:37 +0000] [24] [INFO] Booting worker with pid: 24 [2020-04-19 02:15:37 +0000] [28] [INFO] Booting worker with pid: 28 [2020-04-19:02:15:58:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [19/Apr/2020:02:15:58 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:15:58 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" [2020-04-19:02:15:58:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [19/Apr/2020:02:15:58 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:15:58 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" 2020-04-19T02:15:58.874:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-04-19:02:16:02:INFO] No GPUs detected (normal if no gpus installed) [2020-04-19:02:16:02:INFO] No GPUs detected (normal if no gpus installed) [2020-04-19:02:16:02:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:02:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:02:INFO] No GPUs detected (normal if no gpus installed) [2020-04-19:02:16:02:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:02:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:02:INFO] No GPUs detected (normal if no gpus installed) [2020-04-19:02:16:02:INFO] No GPUs detected (normal if no gpus installed) [2020-04-19:02:16:02:INFO] 
Determined delimiter of CSV input is ',' [2020-04-19:02:16:02:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:02:INFO] No GPUs detected (normal if no gpus installed) [2020-04-19:02:16:02:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:02:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:06:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:06:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:16:09 +0000] "POST /invocations HTTP/1.1" 200 12204 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:16:09 +0000] "POST /invocations HTTP/1.1" 200 12203 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:16:09 +0000] "POST /invocations HTTP/1.1" 200 12178 "-" "Go-http-client/1.1" [2020-04-19:02:16:09:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:09:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:16:09 +0000] "POST /invocations HTTP/1.1" 200 12166 "-" "Go-http-client/1.1" [2020-04-19:02:16:09:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:09:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:16:09 +0000] "POST /invocations HTTP/1.1" 200 12204 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:16:09 +0000] "POST /invocations HTTP/1.1" 200 12203 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:16:09 +0000] "POST /invocations HTTP/1.1" 200 12178 "-" "Go-http-client/1.1" [2020-04-19:02:16:09:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:09:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:16:09 +0000] "POST /invocations HTTP/1.1" 200 12166 "-" "Go-http-client/1.1" [2020-04-19:02:16:09:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:09:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:16:12 +0000] "POST /invocations HTTP/1.1" 200 12201 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:16:12 +0000] "POST /invocations HTTP/1.1" 200 12212 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:16:12 +0000] "POST /invocations HTTP/1.1" 200 12181 "-" "Go-http-client/1.1" [2020-04-19:02:16:12:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:12:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:16:12 +0000] "POST /invocations HTTP/1.1" 200 12201 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:16:12 +0000] "POST /invocations HTTP/1.1" 200 12212 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:16:12 +0000] "POST /invocations HTTP/1.1" 200 12181 "-" "Go-http-client/1.1" [2020-04-19:02:16:12:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:12:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:16:13 +0000] "POST /invocations HTTP/1.1" 200 12211 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:16:13 +0000] "POST /invocations HTTP/1.1" 200 12211 "-" "Go-http-client/1.1" [2020-04-19:02:16:13:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:13:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:13:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:13:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:16:16 +0000] "POST /invocations HTTP/1.1" 200 12204 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:16:16 +0000] "POST /invocations HTTP/1.1" 200 12177 
"-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:16:16 +0000] "POST /invocations HTTP/1.1" 200 12255 "-" "Go-http-client/1.1" [2020-04-19:02:16:16:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:16:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:16:16 +0000] "POST /invocations HTTP/1.1" 200 12212 "-" "Go-http-client/1.1" [2020-04-19:02:16:16:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:16:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:16:16 +0000] "POST /invocations HTTP/1.1" 200 12204 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:16:16 +0000] "POST /invocations HTTP/1.1" 200 12177 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:16:16 +0000] "POST /invocations HTTP/1.1" 200 12255 "-" "Go-http-client/1.1" [2020-04-19:02:16:16:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:16:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:16:16 +0000] "POST /invocations HTTP/1.1" 200 12212 "-" "Go-http-client/1.1" [2020-04-19:02:16:16:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:16:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:16:23 +0000] "POST /invocations HTTP/1.1" 200 12217 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:16:23 +0000] "POST /invocations HTTP/1.1" 200 12218 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:16:23 +0000] "POST /invocations HTTP/1.1" 200 12217 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:16:23 +0000] "POST /invocations HTTP/1.1" 200 12218 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:16:23 +0000] "POST /invocations HTTP/1.1" 200 12156 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:16:23 +0000] "POST /invocations HTTP/1.1" 200 12201 "-" "Go-http-client/1.1" [2020-04-19:02:16:23:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:23:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:23:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:23:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:16:23 +0000] "POST /invocations HTTP/1.1" 200 12156 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:16:23 +0000] "POST /invocations HTTP/1.1" 200 12201 "-" "Go-http-client/1.1" [2020-04-19:02:16:23:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:23:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:23:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:16:23:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/473.2 KiB (3.5 MiB/s) with 1 file(s) remaining Completed 473.2 KiB/473.2 KiB (6.2 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-1-616940979481/sagemaker-xgboost-2020-04-19-02-11-38-834/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. 
###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) from sklearn.metrics import roc_curve, auc from sklearn.metrics import roc_auc_score import matplotlib.pyplot as plt auc = roc_auc_score(test_y, predictions) print('auc: ', auc) print('') # Compute ROC curve and ROC area for each class fpr, tpr, _ = roc_curve(test_y, predictions) print('fpr: ', fpr) print('tpr: ', tpr) print('') plt.figure() lw = 2 plt.plot(fpr, tpr, color='darkorange', lw=lw, label='ROC curve (area = %0.3f)' % auc) plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic example') plt.legend(loc="lower right") plt.show() ###Output auc: 0.8577600000000001 fpr: [0. 0.16384 1. ] tpr: [0. 0.87936 1. ] ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. 
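Rather than eyeballing a single encoded review as in the next cell, the check can also be made programmatic; the optional cell below is a small sketch that asserts the full shape of the encoded matrix.
###Code
# OPTIONAL sketch: assert that every encoded review has exactly one column per vocabulary word.
assert new_XV.shape == (len(new_X), len(vocabulary)), \
    "unexpected shape for the bag-of-words encoding of the new data"
print(new_XV.shape)
###Output _____no_output_____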
###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ......................[2020-04-19 02:22:44 +0000] [15] [INFO] Starting gunicorn 19.10.0 [2020-04-19 02:22:44 +0000] [15] [INFO] Listening at: unix:/tmp/gunicorn.sock (15) [2020-04-19 02:22:44 +0000] [15] [INFO] Using worker: gevent [2020-04-19 02:22:44 +0000] [22] [INFO] Booting worker with pid: 22 [2020-04-19 02:22:44 +0000] [23] [INFO] Booting worker with pid: 23 [2020-04-19 02:22:44 +0000] [27] [INFO] Booting worker with pid: 27 [2020-04-19 02:22:44 +0000] [31] [INFO] Booting worker with pid: 31 [2020-04-19:02:23:07:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [19/Apr/2020:02:23:07 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:23:07 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" [2020-04-19:02:23:07:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [19/Apr/2020:02:23:07 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:23:07 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" 2020-04-19T02:23:07.804:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-04-19:02:23:10:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:10:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:10:INFO] No GPUs detected (normal if no gpus installed) [2020-04-19:02:23:10:INFO] No GPUs detected (normal if no gpus installed) [2020-04-19:02:23:10:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:10:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:10:INFO] No GPUs detected (normal if no gpus installed) [2020-04-19:02:23:10:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:10:INFO] No GPUs detected (normal if no gpus installed) [2020-04-19:02:23:10:INFO] No GPUs detected (normal if no gpus 
installed) [2020-04-19:02:23:10:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:10:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:10:INFO] No GPUs detected (normal if no gpus installed) [2020-04-19:02:23:10:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:13 +0000] "POST /invocations HTTP/1.1" 200 12179 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:23:14 +0000] "POST /invocations HTTP/1.1" 200 12186 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:23:14 +0000] "POST /invocations HTTP/1.1" 200 12147 "-" "Go-http-client/1.1" [2020-04-19:02:23:14:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:14 +0000] "POST /invocations HTTP/1.1" 200 12185 "-" "Go-http-client/1.1" [2020-04-19:02:23:14:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:14:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:13 +0000] "POST /invocations HTTP/1.1" 200 12179 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:23:14 +0000] "POST /invocations HTTP/1.1" 200 12186 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:23:14 +0000] "POST /invocations HTTP/1.1" 200 12147 "-" "Go-http-client/1.1" [2020-04-19:02:23:14:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:14 +0000] "POST /invocations HTTP/1.1" 200 12185 "-" "Go-http-client/1.1" [2020-04-19:02:23:14:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:14:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:14:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:14:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:17 +0000] "POST /invocations HTTP/1.1" 200 12208 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:23:17 +0000] "POST /invocations HTTP/1.1" 200 12185 "-" "Go-http-client/1.1" [2020-04-19:02:23:17:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:17 +0000] "POST /invocations HTTP/1.1" 200 12205 "-" "Go-http-client/1.1" [2020-04-19:02:23:17:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:17 +0000] "POST /invocations HTTP/1.1" 200 12208 "-" "Go-http-client/1.1" [2020-04-19:02:23:17:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:18:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:17 +0000] "POST /invocations HTTP/1.1" 200 12208 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:23:17 +0000] "POST /invocations HTTP/1.1" 200 12185 "-" "Go-http-client/1.1" [2020-04-19:02:23:17:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:17 +0000] "POST /invocations HTTP/1.1" 200 12205 "-" "Go-http-client/1.1" [2020-04-19:02:23:17:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:17 +0000] "POST /invocations HTTP/1.1" 200 12208 "-" "Go-http-client/1.1" [2020-04-19:02:23:17:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:18:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:24 +0000] "POST /invocations HTTP/1.1" 200 12202 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:23:24 +0000] "POST /invocations HTTP/1.1" 200 12185 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:23:24 +0000] "POST /invocations HTTP/1.1" 200 12225 "-" "Go-http-client/1.1" [2020-04-19:02:23:24:INFO] 
Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:24 +0000] "POST /invocations HTTP/1.1" 200 12202 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:23:24 +0000] "POST /invocations HTTP/1.1" 200 12185 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:23:24 +0000] "POST /invocations HTTP/1.1" 200 12225 "-" "Go-http-client/1.1" [2020-04-19:02:23:24:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:24:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:25:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:25 +0000] "POST /invocations HTTP/1.1" 200 12187 "-" "Go-http-client/1.1" [2020-04-19:02:23:25:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:24:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:25:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:25 +0000] "POST /invocations HTTP/1.1" 200 12187 "-" "Go-http-client/1.1" [2020-04-19:02:23:25:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:28:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:28 +0000] "POST /invocations HTTP/1.1" 200 12197 "-" "Go-http-client/1.1" [2020-04-19:02:23:28:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:28:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:28:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:28 +0000] "POST /invocations HTTP/1.1" 200 12197 "-" "Go-http-client/1.1" [2020-04-19:02:23:28:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:28:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:31 +0000] "POST /invocations HTTP/1.1" 200 12241 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:23:31 +0000] "POST /invocations HTTP/1.1" 200 12201 "-" "Go-http-client/1.1" [2020-04-19:02:23:31:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:32 +0000] "POST /invocations HTTP/1.1" 200 12224 "-" "Go-http-client/1.1" [2020-04-19:02:23:32:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:32 +0000] "POST /invocations HTTP/1.1" 200 12252 "-" "Go-http-client/1.1" [2020-04-19:02:23:32:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:31 +0000] "POST /invocations HTTP/1.1" 200 12241 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:23:31 +0000] "POST /invocations HTTP/1.1" 200 12201 "-" "Go-http-client/1.1" [2020-04-19:02:23:31:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:32 +0000] "POST /invocations HTTP/1.1" 200 12224 "-" "Go-http-client/1.1" [2020-04-19:02:23:32:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:32 +0000] "POST /invocations HTTP/1.1" 200 12252 "-" "Go-http-client/1.1" [2020-04-19:02:23:32:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:32:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:32:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:35 +0000] "POST /invocations HTTP/1.1" 200 12218 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:23:35 +0000] "POST /invocations HTTP/1.1" 200 12218 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:23:35 +0000] "POST /invocations HTTP/1.1" 200 12209 "-" "Go-http-client/1.1" [2020-04-19:02:23:35:INFO] Determined delimiter of 
CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:35 +0000] "POST /invocations HTTP/1.1" 200 12220 "-" "Go-http-client/1.1" [2020-04-19:02:23:35:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:35:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:35 +0000] "POST /invocations HTTP/1.1" 200 12194 "-" "Go-http-client/1.1" [2020-04-19:02:23:36:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:35 +0000] "POST /invocations HTTP/1.1" 200 12209 "-" "Go-http-client/1.1" [2020-04-19:02:23:35:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:35 +0000] "POST /invocations HTTP/1.1" 200 12220 "-" "Go-http-client/1.1" [2020-04-19:02:23:35:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:23:35:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:23:35 +0000] "POST /invocations HTTP/1.1" 200 12194 "-" "Go-http-client/1.1" [2020-04-19:02:23:36:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-us-east-1-616940979481/sagemaker-xgboost-2020-04-19-02-19-21-199/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Using already existing model: sagemaker-xgboost-2020-04-19-02-04-54-307 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. 
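An offline alternative, sketched below, reuses the batch-transform predictions loaded above to list the indices of the reviews the current model got wrong; the cells that follow instead query the live endpoint a few samples at a time.
###Code
# Sketch: collect the indices of misclassified new reviews using the
# batch-transform predictions that were read in above.
misclassified = [i for i, (pred, label) in enumerate(zip(predictions, new_Y)) if pred != label]
print(len(misclassified), "misclassified reviews; first few indices:", misclassified[:10])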
###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) print(next(gn)) print(next(gn)) print(next(gn)) ###Output (['quit', 'possibl', 'nicest', 'woman', 'show', 'busi', 'sexiest', 'debbi', 'give', 'anoth', 'fine', 'perform', 'although', 'work', 'american', 'nightmar', 'far', 'superior', 'still', 'worth', 'watch', 'film', 'cast', 'fill', 'typic', 'melros', 'place', 'type', 'chisel', 'featur', 'seduct', 'curv', 'never', 'seen', 'debbi', 'laura', 'nativo', 'actress', 'seen', 'similar', 'delta', 'delta', 'die', 'plot', 'center', 'around', 'group', 'california', 'arrog', 'initi', 'poor', 'naiv', 'debbi', 'rochon', 'cliqu', 'tell', 'murder', 'club', 'must', 'kill', 'someon', 'accept', 'debbi', 'want', 'noth', 'accept', 'cool', 'peopl', 'quickli', 'kill', 'person', 'group', 'must', 'decid', 'fell', 'joke', 'violenc', 'plenti', 'debbi', 'rochon', 'occasion', 'blood', 'splatter', 'murder', 'scene', 'done', 'face', 'gore', 'hound', 'sure', 'enjoy', 'nuditi', 'plenti', 'well', 'debbi', 'rochon', 'sever', 'nude', 'scene', 'mani', 'name', 'actress', 'actor', 'pool', 'parti', 'seem', 'excus', 'get', 'everyon', 'nake', 'man', 'woman', 'alik', 'juli', 'strain', 'also', 'topless', 'cameo', 'charact', 'gone', 'first', 'five', 'minut', 'stori', 'could', 'receiv', 'higher', 'vote', 'plot', 'interest', 'uniqu', 'plot', 'serv', 'filler', 'nude', 'scene', 'understand', 'b', 'rate', 'film', 'use', 'nuditi', 'often', 'borderlin', 'excess', 'act', 'act', 'sub', 'standard', 'say', 'least', 'rochon', 'alway', 'treat', 'easili', 'best', 'b', 'rate', 'actress', 'busi', 'today', 'charact', 'american', 'nightmar', 'superior', 'danni', 'wolsk', 'fine', 'job', 'debbi', 'object', 'lust', 'actor', 'noth', 'write'], 0) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. 
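The cells below fit that vectorizer and then compare the two vocabularies as sets. Beyond set membership, it can also help to compare how often individual words occur in each corpus, which is one way to start on the open-ended question further down. The following is only a sketch: it assumes `new_X` still holds the preprocessed new reviews and that the preprocessed training reviews can be re-read from the `preprocessed_data.pkl` cache written in Step 3.
###Code
from collections import Counter
import os
import pickle

# Sketch: re-load the preprocessed training reviews from the Step 3 cache
# (assumes cache_dir and the cache file exist as created earlier).
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
    words_train = pickle.load(f)["words_train"]

# Count how often each (stemmed) word appears in each corpus.
train_counts = Counter(word for review in words_train for word in review)
new_counts = Counter(word for review in new_X for word in review)

# Normalise by corpus size so the two distributions are comparable, then look
# at the words whose relative frequency grew the most in the new data.
train_total = sum(train_counts.values())
new_total = sum(new_counts.values())
freq_shift = {word: new_counts[word] / new_total - train_counts[word] / train_total
              for word in new_counts}
print(sorted(freq_shift, key=freq_shift.get, reverse=True)[:20])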
###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'victorian', 'spill', 'playboy', 'ghetto', '21st', 'weari', 'reincarn'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'omin', 'banana', 'orchestr', 'dubiou', 'masterson', 'optimist', 'sophi'} ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here? Which words (if any) appear with a larger than expected frequency, and what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below (for example, comparing how often individual words occur in each data set). Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. 
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') %%time # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2020-04-19 02:36:40 Starting - Starting the training job... 2020-04-19 02:36:41 Starting - Launching requested ML instances...... 2020-04-19 02:37:47 Starting - Preparing the instances for training... 2020-04-19 02:38:37 Downloading - Downloading input data... 2020-04-19 02:38:54 Training - Downloading the training image... 2020-04-19 02:39:27 Training - Training image download completed. Training in progress.INFO:sagemaker-containers:Imported framework sagemaker_xgboost_container.training INFO:sagemaker-containers:Failed to parse hyperparameter objective value binary:logistic to Json. Returning the value itself INFO:sagemaker-containers:No GPUs detected (normal if no gpus installed) INFO:sagemaker_xgboost_container.training:Running XGBoost Sagemaker in algorithm mode INFO:root:Determined delimiter of CSV input is ',' INFO:root:Determined delimiter of CSV input is ',' INFO:root:Determined delimiter of CSV input is ',' [02:39:31] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, INFO:root:Determined delimiter of CSV input is ',' [02:39:32] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, INFO:root:Single node training. INFO:root:Train matrix has 15000 rows INFO:root:Validation matrix has 10000 rows [0]#011train-error:0.313533#011validation-error:0.3181 [1]#011train-error:0.292867#011validation-error:0.2994 [2]#011train-error:0.2826#011validation-error:0.2905 [3]#011train-error:0.280933#011validation-error:0.2866 [4]#011train-error:0.267667#011validation-error:0.2751 [5]#011train-error:0.2646#011validation-error:0.2728 [6]#011train-error:0.258533#011validation-error:0.27 [7]#011train-error:0.2496#011validation-error:0.2614 [8]#011train-error:0.2412#011validation-error:0.2539 [9]#011train-error:0.238867#011validation-error:0.2538 [10]#011train-error:0.2358#011validation-error:0.2515 [11]#011train-error:0.228333#011validation-error:0.2439 [12]#011train-error:0.227067#011validation-error:0.2421 [13]#011train-error:0.220867#011validation-error:0.2357 [14]#011train-error:0.217667#011validation-error:0.233 [15]#011train-error:0.2128#011validation-error:0.2275 [16]#011train-error:0.2092#011validation-error:0.2258 [17]#011train-error:0.206467#011validation-error:0.2244 [18]#011train-error:0.201733#011validation-error:0.2199 [19]#011train-error:0.2004#011validation-error:0.219 [20]#011train-error:0.1968#011validation-error:0.2165 [21]#011train-error:0.196133#011validation-error:0.2152 [22]#011train-error:0.193933#011validation-error:0.214 [23]#011train-error:0.190133#011validation-error:0.2132 [24]#011train-error:0.188333#011validation-error:0.211 [25]#011train-error:0.186867#011validation-error:0.21 [26]#011train-error:0.1852#011validation-error:0.2091 [27]#011train-error:0.182533#011validation-error:0.2091 [28]#011train-error:0.181467#011validation-error:0.2093 [29]#011train-error:0.178933#011validation-error:0.2074 [30]#011train-error:0.1788#011validation-error:0.2066 [31]#011train-error:0.176667#011validation-error:0.206 [32]#011train-error:0.175467#011validation-error:0.2054 
[33]#011train-error:0.173733#011validation-error:0.2032 [34]#011train-error:0.1724#011validation-error:0.2012 [35]#011train-error:0.1722#011validation-error:0.2003 [36]#011train-error:0.171533#011validation-error:0.1995 [37]#011train-error:0.170933#011validation-error:0.1978 [38]#011train-error:0.1706#011validation-error:0.199 [39]#011train-error:0.166867#011validation-error:0.1937 [40]#011train-error:0.165733#011validation-error:0.1935 [41]#011train-error:0.164267#011validation-error:0.1937 [42]#011train-error:0.163933#011validation-error:0.1933 [43]#011train-error:0.162467#011validation-error:0.1912 [44]#011train-error:0.1618#011validation-error:0.1906 [45]#011train-error:0.1618#011validation-error:0.1921 [46]#011train-error:0.161733#011validation-error:0.1916 [47]#011train-error:0.1614#011validation-error:0.1911 [48]#011train-error:0.161067#011validation-error:0.191 [49]#011train-error:0.159133#011validation-error:0.1907 [50]#011train-error:0.159067#011validation-error:0.1897 [51]#011train-error:0.158733#011validation-error:0.1898 [52]#011train-error:0.157733#011validation-error:0.1888 [53]#011train-error:0.156267#011validation-error:0.1878 [54]#011train-error:0.1562#011validation-error:0.1882 [55]#011train-error:0.154733#011validation-error:0.1873 [56]#011train-error:0.154067#011validation-error:0.1868 [57]#011train-error:0.152933#011validation-error:0.1871 [58]#011train-error:0.1522#011validation-error:0.1866 [59]#011train-error:0.151867#011validation-error:0.1861 [60]#011train-error:0.1508#011validation-error:0.1861 [61]#011train-error:0.149667#011validation-error:0.185 [62]#011train-error:0.149667#011validation-error:0.1854 [63]#011train-error:0.149333#011validation-error:0.1857 [64]#011train-error:0.148333#011validation-error:0.1846 [65]#011train-error:0.147267#011validation-error:0.1849 [66]#011train-error:0.146467#011validation-error:0.1848 [67]#011train-error:0.146067#011validation-error:0.1856 [68]#011train-error:0.145733#011validation-error:0.1861 [69]#011train-error:0.144533#011validation-error:0.1851 [70]#011train-error:0.144533#011validation-error:0.1852 [71]#011train-error:0.144267#011validation-error:0.1858 [72]#011train-error:0.1434#011validation-error:0.185 [73]#011train-error:0.143#011validation-error:0.1856 [74]#011train-error:0.1424#011validation-error:0.1863 2020-04-19 02:41:07 Uploading - Uploading generated training model 2020-04-19 02:41:07 Completed - Training job completed Training seconds: 150 Billable seconds: 150 CPU times: user 625 ms, sys: 33 ms, total: 658 ms Wall time: 4min 43s ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. 
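As a brief aside on the leakage question above: one simple mitigation is to hold out a slice of the new data before any re-training and report metrics only on that slice. The following is a sketch rather than part of the project flow; it assumes the encoded new data (`new_XV`, `new_Y`) is still in memory, which would require re-creating it since it was cleared above to save space. The next cell then creates the transformer as described.
###Code
from sklearn.model_selection import train_test_split

# Sketch (hypothetical flow): set aside a held-out test portion of the new data
# *before* re-training; new_XV was cleared earlier and would need to be rebuilt.
retrain_X, holdout_X, retrain_y, holdout_y = train_test_split(
    new_XV, new_Y, test_size=0.2, random_state=0)

# The new model would then be trained on retrain_X / retrain_y only, and the
# held-out portion would give a less biased estimate of performance on new data.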
###Code %%time # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output CPU times: user 17.9 ms, sys: 0 ns, total: 17.9 ms Wall time: 398 ms ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code %%time # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ......................[2020-04-19 02:44:53 +0000] [14] [INFO] Starting gunicorn 19.10.0 [2020-04-19 02:44:53 +0000] [14] [INFO] Listening at: unix:/tmp/gunicorn.sock (14) [2020-04-19 02:44:53 +0000] [14] [INFO] Using worker: gevent [2020-04-19 02:44:53 +0000] [21] [INFO] Booting worker with pid: 21 [2020-04-19 02:44:53 +0000] [22] [INFO] Booting worker with pid: 22 [2020-04-19 02:44:53 +0000] [23] [INFO] Booting worker with pid: 23 [2020-04-19 02:44:53 +0000] [27] [INFO] Booting worker with pid: 27 [2020-04-19:02:45:26:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [19/Apr/2020:02:45:26 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:45:26 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" [2020-04-19:02:45:29:INFO] No GPUs detected (normal if no gpus installed) [2020-04-19:02:45:29:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:45:29:INFO] No GPUs detected (normal if no gpus installed) [2020-04-19:02:45:30:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:45:30:INFO] No GPUs detected (normal if no gpus installed) [2020-04-19:02:45:30:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:45:30:INFO] Determined delimiter of CSV input is ',' 2020-04-19T02:45:26.685:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD 169.254.255.130 - - [19/Apr/2020:02:45:32 +0000] "POST /invocations HTTP/1.1" 200 12068 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:45:32 +0000] "POST /invocations HTTP/1.1" 200 12068 "-" "Go-http-client/1.1" [2020-04-19:02:45:32:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:33 +0000] "POST /invocations HTTP/1.1" 200 12085 "-" "Go-http-client/1.1" [2020-04-19:02:45:33:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:33 +0000] "POST /invocations HTTP/1.1" 200 12056 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:45:33 +0000] "POST /invocations HTTP/1.1" 200 12076 "-" "Go-http-client/1.1" [2020-04-19:02:45:33:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:45:32:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:33 +0000] "POST /invocations HTTP/1.1" 200 12085 "-" "Go-http-client/1.1" [2020-04-19:02:45:33:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:33 +0000] "POST /invocations HTTP/1.1" 200 12056 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:45:33 +0000] "POST /invocations HTTP/1.1" 200 12076 "-" "Go-http-client/1.1" [2020-04-19:02:45:33:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:45:33:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:45:33:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - 
[19/Apr/2020:02:45:36 +0000] "POST /invocations HTTP/1.1" 200 12110 "-" "Go-http-client/1.1" [2020-04-19:02:45:36:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:36 +0000] "POST /invocations HTTP/1.1" 200 12110 "-" "Go-http-client/1.1" [2020-04-19:02:45:36:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:36 +0000] "POST /invocations HTTP/1.1" 200 12079 "-" "Go-http-client/1.1" [2020-04-19:02:45:37:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:37 +0000] "POST /invocations HTTP/1.1" 200 12099 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:45:37 +0000] "POST /invocations HTTP/1.1" 200 12092 "-" "Go-http-client/1.1" [2020-04-19:02:45:37:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:45:37:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:36 +0000] "POST /invocations HTTP/1.1" 200 12079 "-" "Go-http-client/1.1" [2020-04-19:02:45:37:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:37 +0000] "POST /invocations HTTP/1.1" 200 12099 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:45:37 +0000] "POST /invocations HTTP/1.1" 200 12092 "-" "Go-http-client/1.1" [2020-04-19:02:45:37:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:45:37:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:39 +0000] "POST /invocations HTTP/1.1" 200 12094 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:45:39 +0000] "POST /invocations HTTP/1.1" 200 12094 "-" "Go-http-client/1.1" [2020-04-19:02:45:40:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:40 +0000] "POST /invocations HTTP/1.1" 200 12097 "-" "Go-http-client/1.1" [2020-04-19:02:45:40:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:40 +0000] "POST /invocations HTTP/1.1" 200 12101 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:45:40 +0000] "POST /invocations HTTP/1.1" 200 12106 "-" "Go-http-client/1.1" [2020-04-19:02:45:40:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:40 +0000] "POST /invocations HTTP/1.1" 200 12097 "-" "Go-http-client/1.1" [2020-04-19:02:45:40:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:40 +0000] "POST /invocations HTTP/1.1" 200 12101 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:45:40 +0000] "POST /invocations HTTP/1.1" 200 12106 "-" "Go-http-client/1.1" [2020-04-19:02:45:41:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:45:41:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:45:41:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:45:41:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:43 +0000] "POST /invocations HTTP/1.1" 200 12128 "-" "Go-http-client/1.1" [2020-04-19:02:45:44:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:44 +0000] "POST /invocations HTTP/1.1" 200 12124 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:45:44 +0000] "POST /invocations HTTP/1.1" 200 12108 "-" "Go-http-client/1.1" [2020-04-19:02:45:44:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:45:44:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:43 +0000] "POST /invocations HTTP/1.1" 200 12128 "-" "Go-http-client/1.1" 
[2020-04-19:02:45:44:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:44 +0000] "POST /invocations HTTP/1.1" 200 12124 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:45:44 +0000] "POST /invocations HTTP/1.1" 200 12108 "-" "Go-http-client/1.1" [2020-04-19:02:45:44:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:45:44:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:46 +0000] "POST /invocations HTTP/1.1" 200 12102 "-" "Go-http-client/1.1" [2020-04-19:02:45:47:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:47 +0000] "POST /invocations HTTP/1.1" 200 12112 "-" "Go-http-client/1.1" [2020-04-19:02:45:47:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:46 +0000] "POST /invocations HTTP/1.1" 200 12102 "-" "Go-http-client/1.1" [2020-04-19:02:45:47:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:47 +0000] "POST /invocations HTTP/1.1" 200 12112 "-" "Go-http-client/1.1" [2020-04-19:02:45:47:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:47 +0000] "POST /invocations HTTP/1.1" 200 12098 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:45:47 +0000] "POST /invocations HTTP/1.1" 200 12098 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:45:47 +0000] "POST /invocations HTTP/1.1" 200 12100 "-" "Go-http-client/1.1" [2020-04-19:02:45:48:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:45:48:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:47 +0000] "POST /invocations HTTP/1.1" 200 12100 "-" "Go-http-client/1.1" [2020-04-19:02:45:48:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:45:48:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:50 +0000] "POST /invocations HTTP/1.1" 200 12094 "-" "Go-http-client/1.1" [2020-04-19:02:45:50:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:50 +0000] "POST /invocations HTTP/1.1" 200 12094 "-" "Go-http-client/1.1" [2020-04-19:02:45:50:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:51 +0000] "POST /invocations HTTP/1.1" 200 12124 "-" "Go-http-client/1.1" [2020-04-19:02:45:51:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:51 +0000] "POST /invocations HTTP/1.1" 200 12123 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:45:51 +0000] "POST /invocations HTTP/1.1" 200 12091 "-" "Go-http-client/1.1" [2020-04-19:02:45:51:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:45:51:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:51 +0000] "POST /invocations HTTP/1.1" 200 12124 "-" "Go-http-client/1.1" [2020-04-19:02:45:51:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [19/Apr/2020:02:45:51 +0000] "POST /invocations HTTP/1.1" 200 12123 "-" "Go-http-client/1.1" 169.254.255.130 - - [19/Apr/2020:02:45:51 +0000] "POST /invocations HTTP/1.1" 200 12091 "-" "Go-http-client/1.1" [2020-04-19:02:45:51:INFO] Determined delimiter of CSV input is ',' [2020-04-19:02:45:51:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. 
###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/469.6 KiB (3.5 MiB/s) with 1 file(s) remaining Completed 469.6 KiB/469.6 KiB (6.0 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-1-616940979481/sagemaker-xgboost-2020-04-19-02-41-23-905/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. 
###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) from sklearn.metrics import roc_curve, auc from sklearn.metrics import roc_auc_score import matplotlib.pyplot as plt auc = roc_auc_score(test_Y, predictions) print('auc: ', auc) print('') # Compute ROC curve and ROC area for each class fpr, tpr, _ = roc_curve(test_Y, predictions) print('fpr: ', fpr) print('tpr: ', tpr) print('') plt.figure() lw = 2 plt.plot(fpr, tpr, color='darkorange', lw=lw, label='ROC curve (area = %0.3f)' % auc) plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic example') plt.legend(loc="lower right") plt.show() ###Output auc: 0.8260000000000001 fpr: [0. 0.21392 1. ] tpr: [0. 0.86592 1. ] ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. 
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output ---------------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. 
In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. ###Code # Make sure that we use SageMaker 1.x !pip install sagemaker==1.72.0 ###Output Collecting sagemaker==1.72.0 Downloading sagemaker-1.72.0.tar.gz (297 kB)  |████████████████████████████████| 297 kB 15.6 MB/s eta 0:00:01 [?25hRequirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.17.33) Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.5) Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.15.2) Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.5.3) Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5) Collecting smdebug-rulesconfig==0.1.4 Downloading smdebug_rulesconfig-0.1.4-py2.py3-none-any.whl (10 kB) Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.7.0) Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.9) Requirement already satisfied: botocore<1.21.0,>=1.20.33 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.20.33) Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0) Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in 
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.3.4) Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.33->boto3>=1.14.12->sagemaker==1.72.0) (1.26.3) Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.33->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1) Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.4.0) Requirement already satisfied: typing-extensions>=3.6.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.7.4.3) Requirement already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.7) Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0) Building wheels for collected packages: sagemaker Building wheel for sagemaker (setup.py) ... [?25ldone [?25h Created wheel for sagemaker: filename=sagemaker-1.72.0-py2.py3-none-any.whl size=386358 sha256=e3821c9fd16b2882d1663d41e9d24a9038e27ff2af7604c807d9f27b43694b81 Stored in directory: /home/ec2-user/.cache/pip/wheels/c3/58/70/85faf4437568bfaa4c419937569ba1fe54d44c5db42406bbd7 Successfully built sagemaker Installing collected packages: smdebug-rulesconfig, sagemaker Attempting uninstall: smdebug-rulesconfig Found existing installation: smdebug-rulesconfig 1.0.1 Uninstalling smdebug-rulesconfig-1.0.1: Successfully uninstalled smdebug-rulesconfig-1.0.1 Attempting uninstall: sagemaker Found existing installation: sagemaker 2.30.0 Uninstalling sagemaker-2.30.0: Successfully uninstalled sagemaker-2.30.0 Successfully installed sagemaker-1.72.0 smdebug-rulesconfig-0.1.4 ###Markdown Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2021-03-26 15:46:39-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 
100%[===================>] 80.23M 24.0MB/s in 4.5s 2021-03-26 15:46:44 (18.0 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
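As a quick aside (this cell is not part of the original notebook), the toy sketch below shows what a Bag-of-Words encoding looks like on a couple of made-up documents: the vectorizer learns its vocabulary from the "training" documents only, and any word it has never seen is simply dropped when transforming new text. The names `toy_train` and `toy_vectorizer` are purely illustrative.
###Code
# A tiny, self-contained Bag-of-Words illustration (not part of the original notebook).
from sklearn.feature_extraction.text import CountVectorizer

toy_train = ["the movie was great", "the movie was bad"]  # hypothetical "training" documents

toy_vectorizer = CountVectorizer()
print(toy_vectorizer.fit_transform(toy_train).toarray())  # one row of word counts per document
print(sorted(toy_vectorizer.vocabulary_))                 # the vocabulary learned from the training documents

# "terrible" was never seen during fitting, so it contributes nothing to the encoding
print(toy_vectorizer.transform(["the movie was terrible"]).toarray())
###Output
_____no_output_____
###Markdown
With that picture in mind, the cell below builds the real feature extractor, restricted to the `5000` most frequent words of the training set.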
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])

val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.

For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
    os.makedirs(data_dir)

pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)

pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)

# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3

Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.

For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.

Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.

For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker

session = sagemaker.Session() # Store the current SageMaker session

# S3 prefix (which folder will we use)
prefix = 'sentiment-update'

test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Creating the XGBoost model

Now that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker.
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.

- Model Artifacts
- Training Code (Container)
- Inference Code (Container)

The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.

The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.

The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role

# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()

# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')

# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
                                    role,      # What is our current IAM Role
                                    train_instance_count=1, # How many compute instances
                                    train_instance_type='ml.m4.xlarge', # What kind of compute instances
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                    sagemaker_session=session)

# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
                        eta=0.2,
                        gamma=4,
                        min_child_weight=6,
                        subsample=0.8,
                        silent=0,
                        objective='binary:logistic',
                        early_stopping_rounds=10,
                        num_round=500)
###Output
Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
Fit the XGBoost model

Now that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')

xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
2021-03-26 15:49:14 Starting - Starting the training job... 2021-03-26 15:49:16 Starting - Launching requested ML instances...... 2021-03-26 15:50:44 Starting - Preparing the instances for training...... 2021-03-26 15:51:34 Downloading - Downloading input data... 2021-03-26 15:52:05 Training - Downloading the training image... 2021-03-26 15:52:26 Training - Training image download completed. Training in progress.Arguments: train [2021-03-26:15:52:27:INFO] Running standalone xgboost training. [2021-03-26:15:52:27:INFO] File size need to be processed in the node: 238.47mb.
Available memory size in the node: 8437.06mb [2021-03-26:15:52:27:INFO] Determined delimiter of CSV input is ',' [15:52:27] S3DistributionType set as FullyReplicated [15:52:29] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2021-03-26:15:52:29:INFO] Determined delimiter of CSV input is ',' [15:52:29] S3DistributionType set as FullyReplicated [15:52:30] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [15:52:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 2 pruned nodes, max_depth=5 [0]#011train-error:0.296467#011validation-error:0.3055 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [15:52:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [1]#011train-error:0.279067#011validation-error:0.2901 [15:52:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [2]#011train-error:0.279267#011validation-error:0.2882 [15:52:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [3]#011train-error:0.274667#011validation-error:0.2837 [15:52:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [4]#011train-error:0.269067#011validation-error:0.2745 [15:52:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.254467#011validation-error:0.2652 [15:52:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [6]#011train-error:0.249533#011validation-error:0.2638 [15:52:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 0 pruned nodes, max_depth=5 [7]#011train-error:0.241933#011validation-error:0.2508 [15:52:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [8]#011train-error:0.230733#011validation-error:0.2443 [15:52:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [9]#011train-error:0.227467#011validation-error:0.2442 [15:52:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [10]#011train-error:0.220133#011validation-error:0.237 [15:52:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 2 pruned nodes, max_depth=5 [11]#011train-error:0.2164#011validation-error:0.2295 [15:52:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [12]#011train-error:0.211267#011validation-error:0.2267 [15:52:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [13]#011train-error:0.208#011validation-error:0.2216 [15:52:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [14]#011train-error:0.204267#011validation-error:0.2184 [15:52:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [15]#011train-error:0.201#011validation-error:0.215 [15:52:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [16]#011train-error:0.197733#011validation-error:0.2124 [15:52:57] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [17]#011train-error:0.193667#011validation-error:0.2109 [15:52:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 14 pruned nodes, max_depth=5 [18]#011train-error:0.191067#011validation-error:0.2054 [15:53:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [19]#011train-error:0.189933#011validation-error:0.206 [15:53:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [20]#011train-error:0.186333#011validation-error:0.2034 [15:53:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 0 pruned nodes, max_depth=5 [21]#011train-error:0.183333#011validation-error:0.1979 [15:53:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [22]#011train-error:0.182667#011validation-error:0.1984 [15:53:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [23]#011train-error:0.179867#011validation-error:0.1967 [15:53:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [24]#011train-error:0.176333#011validation-error:0.1957 [15:53:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [25]#011train-error:0.173133#011validation-error:0.1941 [15:53:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [26]#011train-error:0.172133#011validation-error:0.1931 [15:53:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [27]#011train-error:0.1702#011validation-error:0.1938 [15:53:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [28]#011train-error:0.1678#011validation-error:0.1898 [15:53:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 20 pruned nodes, max_depth=5 [29]#011train-error:0.165333#011validation-error:0.1885 [15:53:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [30]#011train-error:0.163133#011validation-error:0.1877 [15:53:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [31]#011train-error:0.163333#011validation-error:0.1864 [15:53:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [32]#011train-error:0.1618#011validation-error:0.1853 [15:53:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [33]#011train-error:0.160267#011validation-error:0.1841 [15:53:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5 [34]#011train-error:0.157267#011validation-error:0.182 [15:53:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [35]#011train-error:0.157533#011validation-error:0.1807 [15:53:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [36]#011train-error:0.155933#011validation-error:0.182 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMakers Batch Transform functionality. 
Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can peform inference on a large number of samples. An example of this in industry might be peforming an end of month report. This method of inference can also be useful to us as it means to can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer objects from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output ...................................................................................................................................Arguments: serve [2021-03-26 16:19:54 +0000] [1] [INFO] Starting gunicorn 19.9.0 [2021-03-26 16:19:54 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-03-26 16:19:54 +0000] [1] [INFO] Using worker: gevent [2021-03-26 16:19:54 +0000] [20] [INFO] Booting worker with pid: 20 [2021-03-26 16:19:54 +0000] [21] [INFO] Booting worker with pid: 21 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['requests.packages.urllib3.util (/opt/amazon/lib/python3.7/site-packages/requests/packages/urllib3/util/__init__.py)', 'requests.packages.urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/requests/packages/urllib3/util/ssl_.py)']. monkey.patch_all(subprocess=True) [2021-03-26:16:19:54:INFO] Model loaded successfully for worker : 20 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['requests.packages.urllib3.util (/opt/amazon/lib/python3.7/site-packages/requests/packages/urllib3/util/__init__.py)', 'requests.packages.urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/requests/packages/urllib3/util/ssl_.py)']. 
monkey.patch_all(subprocess=True) [2021-03-26:16:19:54:INFO] Model loaded successfully for worker : 21 [2021-03-26 16:19:54 +0000] [22] [INFO] Booting worker with pid: 22 [2021-03-26 16:19:54 +0000] [23] [INFO] Booting worker with pid: 23 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['requests.packages.urllib3.util (/opt/amazon/lib/python3.7/site-packages/requests/packages/urllib3/util/__init__.py)', 'requests.packages.urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/requests/packages/urllib3/util/ssl_.py)']. monkey.patch_all(subprocess=True) [2021-03-26:16:19:54:INFO] Model loaded successfully for worker : 22 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['requests.packages.urllib3.util (/opt/amazon/lib/python3.7/site-packages/requests/packages/urllib3/util/__init__.py)', 'requests.packages.urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/requests/packages/urllib3/util/ssl_.py)']. monkey.patch_all(subprocess=True) [2021-03-26:16:19:54:INFO] Model loaded successfully for worker : 23 [2021-03-26:16:20:01:INFO] Sniff delimiter as ',' [2021-03-26:16:20:01:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:01:INFO] Sniff delimiter as ',' [2021-03-26:16:20:01:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:02:INFO] Sniff delimiter as ',' [2021-03-26:16:20:02:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:02:INFO] Sniff delimiter as ',' [2021-03-26:16:20:02:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:01:INFO] Sniff delimiter as ',' [2021-03-26:16:20:01:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:01:INFO] Sniff delimiter as ',' [2021-03-26:16:20:01:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:02:INFO] Sniff delimiter as ',' [2021-03-26:16:20:02:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:02:INFO] Sniff delimiter as ',' [2021-03-26:16:20:02:INFO] Determined delimiter of CSV input is ',' 2021-03-26T16:19:58.438:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2021-03-26:16:20:05:INFO] Sniff delimiter as ',' [2021-03-26:16:20:05:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:05:INFO] Sniff delimiter as ',' [2021-03-26:16:20:05:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:05:INFO] Sniff delimiter as ',' [2021-03-26:16:20:05:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:05:INFO] Sniff delimiter as ',' [2021-03-26:16:20:05:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:05:INFO] Sniff delimiter as ',' [2021-03-26:16:20:05:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:06:INFO] Sniff delimiter as ',' [2021-03-26:16:20:06:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:05:INFO] Sniff delimiter as ',' 
[2021-03-26:16:20:05:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:06:INFO] Sniff delimiter as ',' [2021-03-26:16:20:06:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:09:INFO] Sniff delimiter as ',' [2021-03-26:16:20:09:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:09:INFO] Sniff delimiter as ',' [2021-03-26:16:20:09:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:09:INFO] Sniff delimiter as ',' [2021-03-26:16:20:09:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:09:INFO] Sniff delimiter as ',' [2021-03-26:16:20:09:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:13:INFO] Sniff delimiter as ',' [2021-03-26:16:20:13:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:13:INFO] Sniff delimiter as ',' [2021-03-26:16:20:13:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:13:INFO] Sniff delimiter as ',' [2021-03-26:16:20:13:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:13:INFO] Sniff delimiter as ',' [2021-03-26:16:20:13:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:13:INFO] Sniff delimiter as ',' [2021-03-26:16:20:13:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:13:INFO] Sniff delimiter as ',' [2021-03-26:16:20:13:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:13:INFO] Sniff delimiter as ',' [2021-03-26:16:20:13:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:13:INFO] Sniff delimiter as ',' [2021-03-26:16:20:13:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:20:INFO] Sniff delimiter as ',' [2021-03-26:16:20:20:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:21:INFO] Sniff delimiter as ',' [2021-03-26:16:20:21:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:21:INFO] Sniff delimiter as ',' [2021-03-26:16:20:21:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:21:INFO] Sniff delimiter as ',' [2021-03-26:16:20:21:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:20:INFO] Sniff delimiter as ',' [2021-03-26:16:20:20:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:21:INFO] Sniff delimiter as ',' [2021-03-26:16:20:21:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:21:INFO] Sniff delimiter as ',' [2021-03-26:16:20:21:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:21:INFO] Sniff delimiter as ',' [2021-03-26:16:20:21:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:24:INFO] Sniff delimiter as ',' [2021-03-26:16:20:24:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:24:INFO] Sniff delimiter as ',' [2021-03-26:16:20:24:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:24:INFO] Sniff delimiter as ',' [2021-03-26:16:20:24:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:25:INFO] Sniff delimiter as ',' [2021-03-26:16:20:25:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:25:INFO] Sniff delimiter as ',' [2021-03-26:16:20:25:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:24:INFO] Sniff delimiter as ',' [2021-03-26:16:20:24:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:25:INFO] Sniff delimiter as ',' [2021-03-26:16:20:25:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:25:INFO] Sniff delimiter as ',' [2021-03-26:16:20:25:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:28:INFO] Sniff delimiter as ',' 
[2021-03-26:16:20:28:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:28:INFO] Sniff delimiter as ',' [2021-03-26:16:20:28:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:28:INFO] Sniff delimiter as ',' [2021-03-26:16:20:28:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:28:INFO] Sniff delimiter as ',' [2021-03-26:16:20:28:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:28:INFO] Sniff delimiter as ',' [2021-03-26:16:20:28:INFO] Determined delimiter of CSV input is ',' [2021-03-26:16:20:28:INFO] Sniff delimiter as ',' [2021-03-26:16:20:28:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/473.9 KiB (1.8 MiB/s) with 1 file(s) remaining Completed 473.9 KiB/473.9 KiB (3.2 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-1-534407698314/xgboost-2021-03-26-15-58-02-023/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
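(As an aside that is not part of the original notebook: the toy cell below sketches what passing a fixed `vocabulary` to a `CountVectorizer` does. Words outside the supplied vocabulary are silently ignored at transform time, so every encoded review ends up with the same, known length. The names `toy_vocabulary` and `toy_vectorizer` are made up for the illustration.)
###Code
# A minimal sketch (not part of the original notebook) of a CountVectorizer with a pinned vocabulary.
from sklearn.feature_extraction.text import CountVectorizer

toy_vocabulary = {'movi': 0, 'great': 1, 'bad': 2}  # hypothetical 3-word vocabulary

# The data is already tokenized, so we skip preprocessing and tokenization,
# just like the notebook does for its real vectorizers.
toy_vectorizer = CountVectorizer(vocabulary=toy_vocabulary,
                                 preprocessor=lambda x: x,
                                 tokenizer=lambda x: x)

# 'banana' is not in the vocabulary, so it is dropped from the encoding: [[1 1 0]]
print(toy_vectorizer.transform([['movi', 'great', 'banana']]).toarray())
###Output
_____no_output_____
###Markdown
The actual TODO below does the same thing, but with the full `5000`-word `vocabulary` that was fitted on the original training data.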
###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. Using already existing model: xgboost-2021-03-26-15-49-13-937 ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/474.1 KiB (1.8 MiB/s) with 1 file(s) remaining Completed 474.1 KiB/474.1 KiB (3.2 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-1-534407698314/xgboost-2021-03-26-16-21-26-911/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. 
In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. Using already existing model: xgboost-2021-03-26-15-49-13-937 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. 
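If generators are unfamiliar, the short sketch below (not part of the original notebook) shows the behaviour that `get_sample` relies on: the function body only runs up to the next `yield` each time `next` is called, so we never have to build the full list of misclassified reviews up front. The function `count_up_to` is purely illustrative.
###Code
# A minimal generator illustration (not part of the original notebook).
def count_up_to(limit):
    n = 0
    while n < limit:
        yield n      # execution pauses here until next() is called again
        n += 1

counter = count_up_to(3)
print(next(counter))  # 0
print(next(counter))  # 1
###Output
_____no_output_____
###Markdown
Back to our `gn` generator: each call to `next` gives us another incorrectly classified review along with its true label.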
###Code print(next(gn)) ###Output (['utterli', 'tactic', 'strang', 'watch', 'kinki', 'moment', 'drop', 'dead', 'gorgeou', 'blond', 'act', 'pull', 'string', 'doll', 'rich', 'folk', 'pointless', 'undoubtedli', 'compel', 'late', 'night', 'featur', 'unhing', 'french', 'product', 'stew', 'perplexedli', 'unfocus', 'idea', 'random', 'plot', 'illustr', 'centr', 'charismat', 'star', 'somewhat', 'anti', 'hero', 'alain', 'delon', 'charl', 'bronson', 'realli', 'get', 'much', 'especi', 'confin', 'lengthi', 'mid', 'section', 'hide', 'build', 'christma', 'break', 'crack', 'safe', '10', '000', 'possibl', 'combin', 'oh', 'fun', 'odd', 'intrigu', 'relationship', 'form', 'delon', 'bronson', 'charact', 'manipul', 'battl', 'will', 'childishli', 'sli', 'game', 'two', 'come', 'understand', 'see', 'honour', 'involv', 'mutual', 'respect', 'would', 'go', 'play', 'part', 'twisti', 'second', 'half', 'stori', 'undetect', 'curv', 'ball', 'still', 'encount', 'earli', 'suggest', 'get', 'vagu', 'magnifi', 'happen', 'end', 'might', 'make', 'jump', 'yeeeeaaaaahhhhhhhhh', 'glad', 'get', 'system', 'pace', 'terribl', 'slow', 'placidli', 'measur', 'seem', 'purpos', 'done', 'exhaust', 'edgi', 'nervou', 'underlin', 'tension', 'watch', 'process', 'repeat', 'know', 'someth', 'quit', 'right', 'scheme', 'eventu', 'come', 'play', 'everyth', 'happen', 'feel', 'spontan', 'climax', 'payoff', 'haunt', 'taut', 'complex', 'script', 'probabl', 'littl', 'crafti', 'good', 'neat', 'novelti', 'coin', 'glass', 'liquid', 'tri', 'spill', 'visual', 'symbol', 'jean', 'herman', 'direct', 'effici', 'sophist', 'low', 'key', 'get', 'tad', 'artifici', 'infus', 'unwelcom', 'ici', 'atmospher', 'sound', 'fx', 'featur', 'potent', 'note', 'francoi', 'deroubaix', 'funki', 'score', 'mainli', 'kept', 'wrap', 'sizzl', 'open', 'top', 'drawer', 'delon', 'quit', 'steeli', 'bronson', 'jovial', 'turn', 'solid', 'work', 'tremend', 'bernard', 'fresson', 'chalk', 'attitud', 'inspector', 'know', 'go', 'led', 'attract', 'femal', 'cast', 'featur', 'abl', 'support', 'brigitt', 'fossey', 'olga', 'georg', 'picot', 'cryptic', 'directionless', 'polish', 'crime', 'drama', 'maintain', 'two', 'lead', 'bizarr', 'inclus'], 1) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:489: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None' warnings.warn("The parameter 'token_pattern' will not be used" ###Markdown Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. 
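Before printing the set differences, here is an optional aside (not part of the original notebook): since simply listing the differing words may not tell us much on its own, one way to dig a little deeper is to count how often those words actually occur in the new reviews. This assumes `new_X` still holds the tokenized reviews (it is only set to `None` a few cells further down), and it is just one possible starting point for the open-ended question asked below.
###Code
# A rough, optional check (not part of the original notebook): how often do the
# words that are new to the top-5000 vocabulary actually appear in the new data?
from collections import Counter

new_token_counts = Counter(word for review in new_X for word in review)

for word in sorted(new_vocabulary - original_vocabulary):
    print(word, new_token_counts[word])
###Output
_____no_output_____
###Markdown
Now, back to the set differences themselves: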
###Code
print(original_vocabulary - new_vocabulary)
###Output
{'spill', 'weari', 'ghetto', 'playboy', 'reincarn', '21st', 'victorian'}
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
{'omin', 'optimist', 'dubiou', 'orchestr', 'sophi', 'masterson', 'banana'}
###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.

**Question:** What exactly is going on here? Not only which words (if any) appear with a larger than expected frequency, but also, what does this mean? What has changed about the world that our original model no longer takes into account?

**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data.

(TODO) Build a new model

Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.

To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.

**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.

In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code
import pandas as pd

# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])

new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier.
This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2021-03-26 16:56:45 Starting - Starting the training job... 2021-03-26 16:56:47 Starting - Launching requested ML instances......... 2021-03-26 16:58:19 Starting - Preparing the instances for training... 2021-03-26 16:59:08 Downloading - Downloading input data... 2021-03-26 16:59:26 Training - Downloading the training image.. 2021-03-26 17:00:00 Training - Training image download completed. 
Training in progress.Arguments: train [2021-03-26:17:00:00:INFO] Running standalone xgboost training. [2021-03-26:17:00:00:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8410.87mb [2021-03-26:17:00:00:INFO] Determined delimiter of CSV input is ',' [17:00:00] S3DistributionType set as FullyReplicated [17:00:02] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2021-03-26:17:00:02:INFO] Determined delimiter of CSV input is ',' [17:00:02] S3DistributionType set as FullyReplicated [17:00:04] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [17:00:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [0]#011train-error:0.302733#011validation-error:0.3092 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [17:00:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [1]#011train-error:0.2978#011validation-error:0.3003 [17:00:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [2]#011train-error:0.285533#011validation-error:0.2901 [17:00:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [3]#011train-error:0.28#011validation-error:0.2876 [17:00:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 14 pruned nodes, max_depth=5 [4]#011train-error:0.275467#011validation-error:0.284 [17:00:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [5]#011train-error:0.267467#011validation-error:0.2783 [17:00:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [6]#011train-error:0.254867#011validation-error:0.2682 [17:00:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [7]#011train-error:0.250133#011validation-error:0.2618 [17:00:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [8]#011train-error:0.235667#011validation-error:0.2453 [17:00:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [9]#011train-error:0.231333#011validation-error:0.2434 [17:00:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [10]#011train-error:0.226467#011validation-error:0.2392 [17:00:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [11]#011train-error:0.222133#011validation-error:0.2353 [17:00:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [12]#011train-error:0.220067#011validation-error:0.2336 [17:00:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [13]#011train-error:0.214133#011validation-error:0.2313 [17:00:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [14]#011train-error:0.21#011validation-error:0.2268 [17:00:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 
[15]#011train-error:0.207467#011validation-error:0.2254 [17:00:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [16]#011train-error:0.2052#011validation-error:0.2236 [17:00:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [17]#011train-error:0.203#011validation-error:0.2219 [17:00:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5 [18]#011train-error:0.200467#011validation-error:0.2195 [17:00:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [19]#011train-error:0.197333#011validation-error:0.2165 [17:00:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [20]#011train-error:0.195933#011validation-error:0.214 [17:00:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [21]#011train-error:0.194467#011validation-error:0.2113 [17:00:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [22]#011train-error:0.190067#011validation-error:0.2089 [17:00:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [23]#011train-error:0.1868#011validation-error:0.2077 [17:00:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [24]#011train-error:0.184733#011validation-error:0.2082 [17:00:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [25]#011train-error:0.1828#011validation-error:0.207 [17:00:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 14 pruned nodes, max_depth=5 [26]#011train-error:0.180533#011validation-error:0.2051 [17:00:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5 [27]#011train-error:0.1786#011validation-error:0.2025 [17:00:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [28]#011train-error:0.1784#011validation-error:0.2 [17:00:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [29]#011train-error:0.177333#011validation-error:0.1991 [17:00:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5 [30]#011train-error:0.176#011validation-error:0.1983 [17:00:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [31]#011train-error:0.174#011validation-error:0.1977 [17:00:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 2 pruned nodes, max_depth=5 [32]#011train-error:0.172133#011validation-error:0.1972 [17:00:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [33]#011train-error:0.169333#011validation-error:0.1954 [17:00:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [34]#011train-error:0.167733#011validation-error:0.1948 [17:00:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [35]#011train-error:0.167#011validation-error:0.1929 [17:00:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 
[36]#011train-error:0.165867#011validation-error:0.193 [17:00:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 8 pruned nodes, max_depth=5 [37]#011train-error:0.1664#011validation-error:0.1928 [17:01:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [38]#011train-error:0.164867#011validation-error:0.1925 [17:01:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 16 pruned nodes, max_depth=5 [39]#011train-error:0.164667#011validation-error:0.1921 [17:01:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5 [40]#011train-error:0.1626#011validation-error:0.1905 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ................................Arguments: serve [2021-03-26 17:08:12 +0000] [1] [INFO] Starting gunicorn 19.9.0 [2021-03-26 17:08:12 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-03-26 17:08:12 +0000] [1] [INFO] Using worker: gevent [2021-03-26 17:08:12 +0000] [20] [INFO] Booting worker with pid: 20 [2021-03-26 17:08:12 +0000] [21] [INFO] Booting worker with pid: 21 [2021-03-26 17:08:12 +0000] [22] [INFO] Booting worker with pid: 22 [2021-03-26 17:08:12 +0000] [23] [INFO] Booting worker with pid: 23 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['requests.packages.urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/requests/packages/urllib3/util/ssl_.py)', 'requests.packages.urllib3.util (/opt/amazon/lib/python3.7/site-packages/requests/packages/urllib3/util/__init__.py)']. 
monkey.patch_all(subprocess=True) /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['requests.packages.urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/requests/packages/urllib3/util/ssl_.py)', 'requests.packages.urllib3.util (/opt/amazon/lib/python3.7/site-packages/requests/packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2021-03-26:17:08:12:INFO] Model loaded successfully for worker : 20 [2021-03-26:17:08:12:INFO] Model loaded successfully for worker : 21 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['requests.packages.urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/requests/packages/urllib3/util/ssl_.py)', 'requests.packages.urllib3.util (/opt/amazon/lib/python3.7/site-packages/requests/packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2021-03-26:17:08:12:INFO] Model loaded successfully for worker : 22 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['requests.packages.urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/requests/packages/urllib3/util/ssl_.py)', 'requests.packages.urllib3.util (/opt/amazon/lib/python3.7/site-packages/requests/packages/urllib3/util/__init__.py)']. 
monkey.patch_all(subprocess=True) [2021-03-26:17:08:12:INFO] Model loaded successfully for worker : 23 [2021-03-26:17:08:19:INFO] Sniff delimiter as ',' [2021-03-26:17:08:19:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:19:INFO] Sniff delimiter as ',' [2021-03-26:17:08:19:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:19:INFO] Sniff delimiter as ',' [2021-03-26:17:08:19:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:19:INFO] Sniff delimiter as ',' [2021-03-26:17:08:19:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:19:INFO] Sniff delimiter as ',' [2021-03-26:17:08:19:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:19:INFO] Sniff delimiter as ',' [2021-03-26:17:08:19:INFO] Determined delimiter of CSV input is ',' 2021-03-26T17:08:16.662:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2021-03-26:17:08:22:INFO] Sniff delimiter as ',' [2021-03-26:17:08:22:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:22:INFO] Sniff delimiter as ',' [2021-03-26:17:08:22:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:22:INFO] Sniff delimiter as ',' [2021-03-26:17:08:22:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:23:INFO] Sniff delimiter as ',' [2021-03-26:17:08:22:INFO] Sniff delimiter as ',' [2021-03-26:17:08:22:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:22:INFO] Sniff delimiter as ',' [2021-03-26:17:08:22:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:22:INFO] Sniff delimiter as ',' [2021-03-26:17:08:22:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:23:INFO] Sniff delimiter as ',' [2021-03-26:17:08:23:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:23:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:26:INFO] Sniff delimiter as ',' [2021-03-26:17:08:26:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:26:INFO] Sniff delimiter as ',' [2021-03-26:17:08:26:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:26:INFO] Sniff delimiter as ',' [2021-03-26:17:08:26:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:26:INFO] Sniff delimiter as ',' [2021-03-26:17:08:26:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:27:INFO] Sniff delimiter as ',' [2021-03-26:17:08:27:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:26:INFO] Sniff delimiter as ',' [2021-03-26:17:08:26:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:26:INFO] Sniff delimiter as ',' [2021-03-26:17:08:26:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:27:INFO] Sniff delimiter as ',' [2021-03-26:17:08:27:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:30:INFO] Sniff delimiter as ',' [2021-03-26:17:08:30:INFO] Sniff delimiter as ',' [2021-03-26:17:08:30:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:30:INFO] Sniff delimiter as ',' [2021-03-26:17:08:30:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:30:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:30:INFO] Sniff delimiter as ',' [2021-03-26:17:08:30:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:30:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:30:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:34:INFO] Sniff delimiter as ',' [2021-03-26:17:08:34:INFO] Determined delimiter of CSV input is ',' 
[2021-03-26:17:08:34:INFO] Sniff delimiter as ',' [2021-03-26:17:08:34:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:34:INFO] Sniff delimiter as ',' [2021-03-26:17:08:34:INFO] Sniff delimiter as ',' [2021-03-26:17:08:34:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:34:INFO] Sniff delimiter as ',' [2021-03-26:17:08:34:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:34:INFO] Sniff delimiter as ',' [2021-03-26:17:08:34:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:34:INFO] Sniff delimiter as ',' [2021-03-26:17:08:34:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:34:INFO] Determined delimiter of CSV input is ',' [2021-03-26:17:08:34:INFO] Sniff delimiter as ',' [2021-03-26:17:08:34:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/470.4 KiB (2.5 MiB/s) with 1 file(s) remaining Completed 470.4 KiB/470.4 KiB (4.3 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-1-534407698314/xgboost-2021-03-26-17-03-01-003/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. 
###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() test_X=xgb_transformer=xgb=None !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = None # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = None ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. 
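Before doing so, here is one possible way to fill in the two TODO cells above. This is a sketch only: it uses the low-level `create_endpoint_config` and `update_endpoint` calls available through the session's SageMaker client, and the endpoint configuration and variant names are illustrative choices rather than required values.

###Code
# Sketch of one possible completion of the two TODO cells above (names are illustrative).
# Assumes `session`, `new_xgb_transformer` and `xgb_predictor` are the objects created earlier.
from time import gmtime, strftime

new_xgb_endpoint_config_name = "sentiment-update-xgboost-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())

new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
                                    EndpointConfigName = new_xgb_endpoint_config_name,
                                    ProductionVariants = [{
                                        "InstanceType": "ml.m4.xlarge",
                                        "InitialVariantWeight": 1,
                                        "InitialInstanceCount": 1,
                                        "ModelName": new_xgb_transformer.model_name,
                                        "VariantName": "AllTraffic"
                                    }])

# Point the existing endpoint at the new configuration. SageMaker stands up the new model
# first and then swaps it in, so the application using the endpoint sees no downtime.
session.sagemaker_client.update_endpoint(EndpointName = xgb_predictor.endpoint,
                                         EndpointConfigName = new_xgb_endpoint_config_name)
###Output
_____no_output_____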
###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir !cat $data_dir/new_data.csv ###Output _____no_output_____ ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. 
This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output _____no_output_____ ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output _____no_output_____ ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output _____no_output_____ ###Markdown These words themselves don't tell us much, however if one of these words occured with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here. 
Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send an bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. 
###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = None new_val_location = None new_train_location = None ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = None # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = None s3_new_input_validation = None # TODO: Using the new validation and training data, 'fit' your new model. ###Output _____no_output_____ ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = None ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. ###Output _____no_output_____ ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown And see how well the model did. 
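Before reading in the predictions, here is one possible way the upload, estimator and training TODO cells above could be filled in. This is a sketch only: the hyperparameters simply mirror the ones used for the original model, the variable names follow the TODO comments, and `session`, `data_dir`, `prefix`, `container` and `role` are assumed to be the objects defined earlier in the notebook.

###Code
# Sketch of one possible completion of the upload / estimator / training TODO cells above.
# Upload the new data, validation and training files that were just written to data_dir.
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)

# A new estimator, configured the same way as the original model.
new_xgb = sagemaker.estimator.Estimator(container,
                                        role,
                                        train_instance_count=1,
                                        train_instance_type='ml.m4.xlarge',
                                        output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                        sagemaker_session=session)

new_xgb.set_hyperparameters(max_depth=5,
                            eta=0.2,
                            gamma=4,
                            min_child_weight=6,
                            subsample=0.8,
                            silent=0,
                            objective='binary:logistic',
                            early_stopping_rounds=10,
                            num_round=500)

# Train on the new data, using the validation set for early stopping.
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')

new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
_____no_output_____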
###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output _____no_output_____ ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = None ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. 
###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = None # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = None ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. 
###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. 
###Code # Make sure that we use SageMaker 1.x !pip install sagemaker==1.72.0 ###Output Requirement already satisfied: sagemaker==1.72.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (1.72.0) Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.15.2) Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.5.3) Requirement already satisfied: smdebug-rulesconfig==0.1.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.4) Requirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.17.68) Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5) Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.5) Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.7.0) Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.9) Requirement already satisfied: botocore<1.21.0,>=1.20.68 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.20.68) Requirement already satisfied: s3transfer<0.5.0,>=0.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.4.2) Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0) Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.68->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1) Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.68->boto3>=1.14.12->sagemaker==1.72.0) (1.26.4) Requirement already satisfied: typing-extensions>=3.6.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.7.4.3) Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.4.0) Requirement already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.7) Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0) ###Markdown Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. 
[Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output _____no_output_____ ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
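As a toy illustration of this point (separate from the actual pipeline below), a `CountVectorizer` fitted only on training documents builds its vocabulary from those documents alone, so any test word it has never seen is simply ignored. The small made-up reviews here are purely illustrative.

###Code
# Toy illustration only: the vocabulary comes from the training documents,
# so a word that never appears in training (here "soundtrack") contributes nothing at test time.
from sklearn.feature_extraction.text import CountVectorizer

toy_train = [["great", "movie", "great", "acting"], ["boring", "movie"]]
toy_test = [["great", "soundtrack", "boring"]]

toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
print(toy_vectorizer.fit_transform(toy_train).toarray())  # counts over the training vocabulary
print(toy_vectorizer.vocabulary_)                         # word -> column index
print(toy_vectorizer.transform(toy_test).toarray())       # unseen words are dropped
###Output
_____no_output_____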
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer #from sklearn.externals import joblib import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2021-05-14 07:26:05 Starting - Starting the training job... 2021-05-14 07:26:09 Starting - Launching requested ML instances...... 2021-05-14 07:27:12 Starting - Preparing the instances for training...... 2021-05-14 07:28:21 Downloading - Downloading input data... 2021-05-14 07:29:03 Training - Training image download completed. Training in progress..Arguments: train [2021-05-14:07:29:04:INFO] Running standalone xgboost training. [2021-05-14:07:29:04:INFO] File size need to be processed in the node: 238.47mb. 
Available memory size in the node: 8373.21mb [2021-05-14:07:29:04:INFO] Determined delimiter of CSV input is ',' [07:29:04] S3DistributionType set as FullyReplicated [07:29:06] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2021-05-14:07:29:06:INFO] Determined delimiter of CSV input is ',' [07:29:06] S3DistributionType set as FullyReplicated [07:29:07] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [07:29:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [0]#011train-error:0.300467#011validation-error:0.2972 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [07:29:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 2 pruned nodes, max_depth=5 [1]#011train-error:0.281667#011validation-error:0.2818 [07:29:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [2]#011train-error:0.279667#011validation-error:0.28 [07:29:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [3]#011train-error:0.2634#011validation-error:0.2668 [07:29:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [4]#011train-error:0.2616#011validation-error:0.267 [07:29:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 0 pruned nodes, max_depth=5 [5]#011train-error:0.258667#011validation-error:0.2643 [07:29:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [6]#011train-error:0.243867#011validation-error:0.2506 [07:29:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 0 pruned nodes, max_depth=5 [7]#011train-error:0.228#011validation-error:0.237 [07:29:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 0 pruned nodes, max_depth=5 [8]#011train-error:0.230133#011validation-error:0.2355 [07:29:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [9]#011train-error:0.224333#011validation-error:0.231 [07:29:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [10]#011train-error:0.216533#011validation-error:0.2236 [07:29:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [11]#011train-error:0.2098#011validation-error:0.2199 [07:29:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [12]#011train-error:0.207867#011validation-error:0.2176 [07:29:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [13]#011train-error:0.203667#011validation-error:0.2155 [07:29:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 2 pruned nodes, max_depth=5 [14]#011train-error:0.201733#011validation-error:0.2123 [07:29:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [15]#011train-error:0.197333#011validation-error:0.2091 [07:29:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [16]#011train-error:0.194733#011validation-error:0.2044 [07:29:35] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [17]#011train-error:0.1942#011validation-error:0.204 [07:29:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 14 pruned nodes, max_depth=5 [18]#011train-error:0.189#011validation-error:0.2005 [07:29:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [19]#011train-error:0.186533#011validation-error:0.1993 [07:29:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [20]#011train-error:0.181667#011validation-error:0.1949 [07:29:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [21]#011train-error:0.181333#011validation-error:0.193 [07:29:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [22]#011train-error:0.1784#011validation-error:0.1922 [07:29:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [23]#011train-error:0.177267#011validation-error:0.1931 [07:29:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [24]#011train-error:0.174733#011validation-error:0.1904 [07:29:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [25]#011train-error:0.172267#011validation-error:0.1896 [07:29:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [26]#011train-error:0.1704#011validation-error:0.1866 [07:29:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [27]#011train-error:0.166933#011validation-error:0.1841 [07:29:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [28]#011train-error:0.1662#011validation-error:0.1845 [07:29:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 14 pruned nodes, max_depth=5 [29]#011train-error:0.1662#011validation-error:0.184 [07:29:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [30]#011train-error:0.165267#011validation-error:0.1831 [07:29:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [31]#011train-error:0.162133#011validation-error:0.1821 [07:29:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [32]#011train-error:0.160467#011validation-error:0.1812 [07:29:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [33]#011train-error:0.159067#011validation-error:0.1804 [07:29:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [34]#011train-error:0.1566#011validation-error:0.1795 [07:30:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 16 pruned nodes, max_depth=5 [35]#011train-error:0.155333#011validation-error:0.1781 [07:30:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [36]#011train-error:0.153933#011validation-error:0.178 [07:30:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [37]#011train-error:0.152667#011validation-error:0.1773 ###Markdown Testing the modelNow that we've fit 
our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output .............................Arguments: serve [2021-05-14 07:39:31 +0000] [1] [INFO] Starting gunicorn 19.9.0 [2021-05-14 07:39:31 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-05-14 07:39:31 +0000] [1] [INFO] Using worker: gevent [2021-05-14 07:39:31 +0000] [20] [INFO] Booting worker with pid: 20 [2021-05-14 07:39:31 +0000] [21] [INFO] Booting worker with pid: 21 [2021-05-14 07:39:31 +0000] [22] [INFO] Booting worker with pid: 22 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)']. monkey.patch_all(subprocess=True) /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)'].
monkey.patch_all(subprocess=True) [2021-05-14:07:39:31:INFO] Model loaded successfully for worker : 20 [2021-05-14:07:39:31:INFO] Model loaded successfully for worker : 21 [2021-05-14 07:39:31 +0000] [23] [INFO] Booting worker with pid: 23 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)']. monkey.patch_all(subprocess=True) [2021-05-14:07:39:31:INFO] Model loaded successfully for worker : 22 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)']. monkey.patch_all(subprocess=True) [2021-05-14:07:39:31:INFO] Model loaded successfully for worker : 23 2021-05-14T07:39:35.291:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2021-05-14:07:39:39:INFO] Sniff delimiter as ',' [2021-05-14:07:39:39:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:40:INFO] Sniff delimiter as ',' [2021-05-14:07:39:40:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:39:INFO] Sniff delimiter as ',' [2021-05-14:07:39:39:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:40:INFO] Sniff delimiter as ',' [2021-05-14:07:39:40:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:40:INFO] Sniff delimiter as ',' [2021-05-14:07:39:40:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:40:INFO] Sniff delimiter as ',' [2021-05-14:07:39:40:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:40:INFO] Sniff delimiter as ',' [2021-05-14:07:39:40:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:40:INFO] Sniff delimiter as ',' [2021-05-14:07:39:40:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:43:INFO] Sniff delimiter as ',' [2021-05-14:07:39:43:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:43:INFO] Sniff delimiter as ',' [2021-05-14:07:39:43:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:44:INFO] Sniff delimiter as ',' [2021-05-14:07:39:44:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:43:INFO] Sniff delimiter as ',' [2021-05-14:07:39:43:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:43:INFO] Sniff delimiter as ',' [2021-05-14:07:39:43:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:44:INFO] Sniff delimiter as ',' [2021-05-14:07:39:44:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:44:INFO] Sniff delimiter as ',' [2021-05-14:07:39:44:INFO] Sniff delimiter as ',' [2021-05-14:07:39:44:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:44:INFO] Determined 
delimiter of CSV input is ',' [2021-05-14:07:39:47:INFO] Sniff delimiter as ',' [2021-05-14:07:39:47:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:47:INFO] Sniff delimiter as ',' [2021-05-14:07:39:47:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:47:INFO] Sniff delimiter as ',' [2021-05-14:07:39:47:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:47:INFO] Sniff delimiter as ',' [2021-05-14:07:39:47:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:47:INFO] Sniff delimiter as ',' [2021-05-14:07:39:47:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:47:INFO] Sniff delimiter as ',' [2021-05-14:07:39:47:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:48:INFO] Sniff delimiter as ',' [2021-05-14:07:39:48:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:48:INFO] Sniff delimiter as ',' [2021-05-14:07:39:48:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:51:INFO] Sniff delimiter as ',' [2021-05-14:07:39:51:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:51:INFO] Sniff delimiter as ',' [2021-05-14:07:39:51:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:52:INFO] Sniff delimiter as ',' [2021-05-14:07:39:52:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:51:INFO] Sniff delimiter as ',' [2021-05-14:07:39:51:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:51:INFO] Sniff delimiter as ',' [2021-05-14:07:39:51:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:52:INFO] Sniff delimiter as ',' [2021-05-14:07:39:52:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:54:INFO] Sniff delimiter as ',' [2021-05-14:07:39:54:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:54:INFO] Sniff delimiter as ',' [2021-05-14:07:39:54:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:55:INFO] Sniff delimiter as ',' [2021-05-14:07:39:55:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:55:INFO] Sniff delimiter as ',' [2021-05-14:07:39:55:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:55:INFO] Sniff delimiter as ',' [2021-05-14:07:39:55:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:55:INFO] Sniff delimiter as ',' [2021-05-14:07:39:55:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:55:INFO] Sniff delimiter as ',' [2021-05-14:07:39:55:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:55:INFO] Sniff delimiter as ',' [2021-05-14:07:39:55:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:58:INFO] Sniff delimiter as ',' [2021-05-14:07:39:58:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:58:INFO] Sniff delimiter as ',' [2021-05-14:07:39:58:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:59:INFO] Sniff delimiter as ',' [2021-05-14:07:39:59:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:59:INFO] Sniff delimiter as ',' [2021-05-14:07:39:59:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:59:INFO] Sniff delimiter as ',' [2021-05-14:07:39:59:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:59:INFO] Sniff delimiter as ',' [2021-05-14:07:39:59:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:59:INFO] Sniff delimiter as ',' [2021-05-14:07:39:59:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:39:59:INFO] Sniff delimiter as ',' [2021-05-14:07:39:59:INFO] Determined delimiter of 
CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/474.1 KiB (3.7 MiB/s) with 1 file(s) remaining Completed 474.1 KiB/474.1 KiB (6.6 MiB/s) with 1 file(s) remaining download: s3://sagemaker-ap-south-1-135661043022/xgboost-2021-05-14-07-34-51-563/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is to read in the output from our model, convert it to something a little more usable (in this case we want the sentiment to be either `1` (positive) or `0` (negative)), and then compare it to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New Data
So now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app. However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current model
Now that we've loaded the new data, let's check to see how our current XGBoost model performs on it. First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now. To do so, we use the vocabulary that we constructed earlier using the original training data to build a `CountVectorizer` which we will use to transform our new data into its bag of words encoding. **TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary # vectorizer = None # Solution: vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV # new_XV = None # Solution new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary, which in our case is `5000`.
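An equivalent check (a small sketch, not part of the original notebook, assuming `new_XV` is the dense array produced above and `vocabulary` is the dictionary built in Step 3) is to look at the shape of the whole array at once:

```python
# Sketch only: assumes new_XV and the Step 3 vocabulary dict are still in memory.
# new_XV should have one row per new review and one column per vocabulary word.
print(new_XV.shape)                        # expected: (number_of_new_reviews, 5000)
assert new_XV.shape[1] == len(vocabulary)  # same width as the original vocabulary
```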
###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv # Solution: pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location # new_data_location = None # Solution: new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. # Solution: xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ...............................Arguments: serve [2021-05-14 07:46:26 +0000] [1] [INFO] Starting gunicorn 19.9.0 [2021-05-14 07:46:26 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-05-14 07:46:26 +0000] [1] [INFO] Using worker: gevent [2021-05-14 07:46:26 +0000] [21] [INFO] Booting worker with pid: 21 [2021-05-14 07:46:26 +0000] [22] [INFO] Booting worker with pid: 22 [2021-05-14 07:46:26 +0000] [23] [INFO] Booting worker with pid: 23 [2021-05-14 07:46:26 +0000] [24] [INFO] Booting worker with pid: 24 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)']. monkey.patch_all(subprocess=True) /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)']. 
monkey.patch_all(subprocess=True) [2021-05-14:07:46:26:INFO] Model loaded successfully for worker : 21 [2021-05-14:07:46:26:INFO] Model loaded successfully for worker : 22 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)']. monkey.patch_all(subprocess=True) [2021-05-14:07:46:26:INFO] Model loaded successfully for worker : 23 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)', 'urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)']. monkey.patch_all(subprocess=True) [2021-05-14:07:46:26:INFO] Model loaded successfully for worker : 24 [2021-05-14:07:46:33:INFO] Sniff delimiter as ',' [2021-05-14:07:46:33:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:33:INFO] Sniff delimiter as ',' [2021-05-14:07:46:33:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:33:INFO] Sniff delimiter as ',' [2021-05-14:07:46:33:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:34:INFO] Sniff delimiter as ',' [2021-05-14:07:46:34:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:34:INFO] Sniff delimiter as ',' [2021-05-14:07:46:34:INFO] Determined delimiter of CSV input is ',' 2021-05-14T07:46:30.779:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2021-05-14:07:46:37:INFO] Sniff delimiter as ',' [2021-05-14:07:46:37:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:37:INFO] Sniff delimiter as ',' [2021-05-14:07:46:37:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:37:INFO] Sniff delimiter as ',' [2021-05-14:07:46:37:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:37:INFO] Sniff delimiter as ',' [2021-05-14:07:46:37:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:37:INFO] Sniff delimiter as ',' [2021-05-14:07:46:37:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:37:INFO] Sniff delimiter as ',' [2021-05-14:07:46:37:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:38:INFO] Sniff delimiter as ',' [2021-05-14:07:46:38:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:38:INFO] Sniff delimiter as ',' [2021-05-14:07:46:38:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:41:INFO] Sniff delimiter as ',' [2021-05-14:07:46:41:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:41:INFO] Sniff delimiter as ',' [2021-05-14:07:46:41:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:41:INFO] Sniff delimiter as ',' [2021-05-14:07:46:41:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:41:INFO] Sniff 
delimiter as ',' [2021-05-14:07:46:41:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:41:INFO] Sniff delimiter as ',' [2021-05-14:07:46:41:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:41:INFO] Sniff delimiter as ',' [2021-05-14:07:46:41:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:41:INFO] Sniff delimiter as ',' [2021-05-14:07:46:41:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:41:INFO] Sniff delimiter as ',' [2021-05-14:07:46:41:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:45:INFO] Sniff delimiter as ',' [2021-05-14:07:46:45:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:45:INFO] Sniff delimiter as ',' [2021-05-14:07:46:45:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:45:INFO] Sniff delimiter as ',' [2021-05-14:07:46:45:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:45:INFO] Sniff delimiter as ',' [2021-05-14:07:46:45:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:45:INFO] Sniff delimiter as ',' [2021-05-14:07:46:45:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:45:INFO] Sniff delimiter as ',' [2021-05-14:07:46:45:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:45:INFO] Sniff delimiter as ',' [2021-05-14:07:46:45:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:45:INFO] Sniff delimiter as ',' [2021-05-14:07:46:45:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:52:INFO] Sniff delimiter as ',' [2021-05-14:07:46:52:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:52:INFO] Sniff delimiter as ',' [2021-05-14:07:46:52:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:53:INFO] Sniff delimiter as ',' [2021-05-14:07:46:53:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:53:INFO] Sniff delimiter as ',' [2021-05-14:07:46:53:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:53:INFO] Sniff delimiter as ',' [2021-05-14:07:46:53:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:53:INFO] Sniff delimiter as ',' [2021-05-14:07:46:53:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:53:INFO] Sniff delimiter as ',' [2021-05-14:07:46:53:INFO] Determined delimiter of CSV input is ',' [2021-05-14:07:46:53:INFO] Sniff delimiter as ',' [2021-05-14:07:46:53:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/474.4 KiB (3.7 MiB/s) with 1 file(s) remaining Completed 474.4 KiB/474.4 KiB (6.4 MiB/s) with 1 file(s) remaining download: s3://sagemaker-ap-south-1-135661043022/xgboost-2021-05-14-07-41-27-061/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. 
In our case, we are only going to check one thing, and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set. Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set. To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real-life scenario where we have a model that has been deployed and is being used in production. **TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. # xgb_predictor = None # Solution: xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. Using already existing model: xgboost-2021-05-14-07-26-05-616 ###Markdown Diagnose the problem
Now that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect. **NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call `next` on our generator.
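As a brief aside, here is a tiny, self-contained illustration of the idea (a toy example, not part of the notebook's workflow): a generator function pauses at each `yield` and only computes its next value when asked for it, which is why we can pull out a handful of misclassified reviews without sending every single review to the endpoint.

```python
# Toy generator: lazily yields the positions where two label lists disagree.
def first_mismatches(predicted, actual):
    for idx, (p, y) in enumerate(zip(predicted, actual)):
        if p != y:
            yield idx, p, y  # execution pauses here until the caller asks again

gen = first_mismatches([1, 0, 1, 1], [1, 1, 1, 0])
print(next(gen))  # (1, 0, 1) -- the first disagreement
print(next(gen))  # (3, 1, 0) -- computed only when requested
```

The `get_sample` generator above works the same way, except that each step sends one encoded review to the deployed endpoint and yields it only if the prediction disagrees with the label.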
###Code print(next(gn)) ###Output (['watch', 'seri', 'dvd', 'yay', 'strike', 'fresh', 'relev', 'intrigu', 'first', 'air', 'central', 'perform', 'grip', 'script', 'layer', 'stick', 'neck', 'put', 'prison', 'show', 'win', 'new', 'fan', 'still', 'watch', 'come', '2035', 'ask', 'write', 'line', 'seem', 'imdb', 'user', 'unfriendli', 'anal', 'retent', 'code', 'ever', 'pithi', 'point', 'clearli', 'imdb', 'way', 'well', 'unlik', 'imdb', 'submiss', 'editor', 'american', 'gothic', 'understand', 'simplic', 'everyth', '22', 'episod', 'show', 'cover', 'charact', 'develop', 'mani', 'show', 'seven', 'season', 'top', 'question', 'person', 'ethic', 'strength', 'charact', 'way', 'challeng', 'viewer', 'everi', 'turn', 'ask', 'would', 'choos', 'would', 'think', 'given', 'situat', 'show', 'first', 'air', 'still', 'griev', 'twin', 'peak', 'thought', 'would', 'cheap', 'knock', 'person', 'start', 'rate', 'highli', 'suspect', 'stand', 'better', 'year', 'reckon', 'get', 'controversi', 'banana'], 0) ###Markdown After looking at a few examples, maybe we decide to look at the `5000` most frequently appearing words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe some new slang has been introduced, or some other artifact of popular culture has changed the way that people write movie reviews. To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:489: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None' warnings.warn("The parameter 'token_pattern' will not be used" ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'spill', '21st', 'playboy', 'ghetto', 'victorian', 'reincarn', 'weari'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'sophi', 'dubiou', 'masterson', 'orchestr', 'omin', 'optimist', 'banana'} ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency. **Question:** What exactly is going on here? Which words (if any) appear with a larger than expected frequency, and what does this mean? What has changed about the world that our original model no longer takes into account? **NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data.
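One way to start investigating (a minimal sketch under the assumption that `new_vectorizer`, `new_X`, `original_vocabulary` and `new_vocabulary` from the cells above are still in memory) is to count how often the words that are unique to the new vocabulary actually occur in the new reviews; a word that shows up far more often than a typical vocabulary word is a strong hint about what has changed.

```python
# Sketch only: assumes new_vectorizer, new_X, original_vocabulary and new_vocabulary
# are still defined from the cells above.
# Column sums of the new document-term matrix give per-word counts over the new reviews.
new_word_counts = new_vectorizer.transform(new_X).sum(axis=0)

for word in sorted(new_vocabulary - original_vocabulary):
    idx = new_vectorizer.vocabulary_[word]
    print(word, int(new_word_counts[0, idx]))
```

If one of these words (for example the `banana` token that already appeared in the sets above) turns out to occur far more often than the others, that points directly at what the new reviews have in common.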
(TODO) Build a new model
Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed. To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model. **NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results. In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so, to make things simple, we can just assign # the first 10000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3. **TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# new_data_location = None # new_val_location = None # new_train_location = None # Solution: new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. # new_xgb = None # Solution: new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. # Solution: new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = None s3_new_input_validation = None # Solution: s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. # Solution: new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2021-05-14 07:57:43 Starting - Starting the training job... 2021-05-14 07:57:44 Starting - Launching requested ML instances...... 2021-05-14 07:58:56 Starting - Preparing the instances for training...... 2021-05-14 08:00:00 Downloading - Downloading input data... 2021-05-14 08:00:41 Training - Training image download completed. Training in progress..Arguments: train [2021-05-14:08:00:42:INFO] Running standalone xgboost training. [2021-05-14:08:00:42:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8415.79mb [2021-05-14:08:00:42:INFO] Determined delimiter of CSV input is ',' [08:00:42] S3DistributionType set as FullyReplicated [08:00:44] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2021-05-14:08:00:44:INFO] Determined delimiter of CSV input is ',' [08:00:44] S3DistributionType set as FullyReplicated [08:00:45] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [08:00:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 4 pruned nodes, max_depth=5 [0]#011train-error:0.315667#011validation-error:0.3123 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  
Will train until validation-error hasn't improved in 10 rounds. [08:00:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [1]#011train-error:0.296933#011validation-error:0.2947 [08:00:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 2 pruned nodes, max_depth=5 [2]#011train-error:0.292867#011validation-error:0.2906 [08:00:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 4 pruned nodes, max_depth=5 [3]#011train-error:0.282333#011validation-error:0.2829 [08:00:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [4]#011train-error:0.274333#011validation-error:0.2763 [08:00:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.259067#011validation-error:0.2618 [08:00:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [6]#011train-error:0.252267#011validation-error:0.2564 [08:00:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [7]#011train-error:0.244533#011validation-error:0.2518 [08:01:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 10 pruned nodes, max_depth=5 [8]#011train-error:0.2388#011validation-error:0.2511 [08:01:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5 [9]#011train-error:0.2368#011validation-error:0.244 [08:01:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [10]#011train-error:0.232733#011validation-error:0.242 [08:01:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 12 pruned nodes, max_depth=5 [11]#011train-error:0.2274#011validation-error:0.2376 [08:01:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5 [12]#011train-error:0.222467#011validation-error:0.2325 [08:01:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 12 pruned nodes, max_depth=5 [13]#011train-error:0.219867#011validation-error:0.2284 [08:01:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [14]#011train-error:0.212533#011validation-error:0.2248 [08:01:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [15]#011train-error:0.208467#011validation-error:0.2202 [08:01:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [16]#011train-error:0.2094#011validation-error:0.2204 [08:01:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [17]#011train-error:0.204867#011validation-error:0.2168 [08:01:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [18]#011train-error:0.2012#011validation-error:0.2157 [08:01:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [19]#011train-error:0.198133#011validation-error:0.2136 [08:01:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 12 pruned nodes, max_depth=5 [20]#011train-error:0.195867#011validation-error:0.2103 [08:01:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 16 pruned nodes, max_depth=5 
[21]#011train-error:0.193067#011validation-error:0.2085 [08:01:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [22]#011train-error:0.190467#011validation-error:0.2058 [08:01:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [23]#011train-error:0.187667#011validation-error:0.2046 [08:01:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 14 pruned nodes, max_depth=5 [24]#011train-error:0.185333#011validation-error:0.2017 [08:01:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [25]#011train-error:0.1844#011validation-error:0.1987 [08:01:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [26]#011train-error:0.183333#011validation-error:0.198 [08:01:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [27]#011train-error:0.181933#011validation-error:0.1972 [08:01:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [28]#011train-error:0.178533#011validation-error:0.1956 [08:01:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [29]#011train-error:0.177067#011validation-error:0.1928 [08:01:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [30]#011train-error:0.1766#011validation-error:0.1915 [08:01:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [31]#011train-error:0.176#011validation-error:0.1925 [08:01:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [32]#011train-error:0.175267#011validation-error:0.1906 [08:01:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [33]#011train-error:0.173867#011validation-error:0.1886 [08:01:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [34]#011train-error:0.1718#011validation-error:0.1874 [08:01:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [35]#011train-error:0.170267#011validation-error:0.1868 [08:01:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [36]#011train-error:0.168133#011validation-error:0.1858 [08:01:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5 [37]#011train-error:0.166533#011validation-error:0.1849 [08:01:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 14 pruned nodes, max_depth=5 [38]#011train-error:0.1646#011validation-error:0.185 [08:01:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 14 pruned nodes, max_depth=5 [39]#011train-error:0.163067#011validation-error:0.1848 [08:01:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 12 pruned nodes, max_depth=5 [40]#011train-error:0.161733#011validation-error:0.183 [08:01:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [41]#011train-error:0.1608#011validation-error:0.1816 [08:01:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 
[42]#011train-error:0.1608#011validation-error:0.1825 [08:01:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [43]#011train-error:0.159#011validation-error:0.1819 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model # new_xgb_transformer = None # Solution: new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. # Solution: new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ...........................Arguments: serve [2021-05-14 08:07:19 +0000] [1] [INFO] Starting gunicorn 19.9.0 [2021-05-14 08:07:19 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-05-14 08:07:19 +0000] [1] [INFO] Using worker: gevent [2021-05-14 08:07:19 +0000] [20] [INFO] Booting worker with pid: 20 [2021-05-14 08:07:19 +0000] [21] [INFO] Booting worker with pid: 21 [2021-05-14 08:07:19 +0000] [22] [INFO] Booting worker with pid: 22 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. 
monkey.patch_all(subprocess=True) [2021-05-14:08:07:19:INFO] Model loaded successfully for worker : 21 [2021-05-14:08:07:19:INFO] Model loaded successfully for worker : 20 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2021-05-14:08:07:19:INFO] Model loaded successfully for worker : 22 [2021-05-14 08:07:19 +0000] [23] [INFO] Booting worker with pid: 23 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2021-05-14:08:07:19:INFO] Model loaded successfully for worker : 23 2021-05-14T08:07:23.025:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2021-05-14:08:07:26:INFO] Sniff delimiter as ',' [2021-05-14:08:07:26:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:26:INFO] Sniff delimiter as ',' [2021-05-14:08:07:26:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:26:INFO] Sniff delimiter as ',' [2021-05-14:08:07:26:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:26:INFO] Sniff delimiter as ',' [2021-05-14:08:07:26:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:26:INFO] Sniff delimiter as ',' [2021-05-14:08:07:26:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:26:INFO] Sniff delimiter as ',' [2021-05-14:08:07:26:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:26:INFO] Sniff delimiter as ',' [2021-05-14:08:07:26:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:26:INFO] Sniff delimiter as ',' [2021-05-14:08:07:26:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:29:INFO] Sniff delimiter as ',' [2021-05-14:08:07:29:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:29:INFO] Sniff delimiter as ',' [2021-05-14:08:07:29:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:29:INFO] Sniff delimiter as ',' [2021-05-14:08:07:29:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:30:INFO] Sniff delimiter as ',' [2021-05-14:08:07:30:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:30:INFO] Sniff delimiter as ',' [2021-05-14:08:07:30:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:29:INFO] Sniff delimiter as ',' [2021-05-14:08:07:29:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:30:INFO] Sniff delimiter as ',' [2021-05-14:08:07:30:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:30:INFO] Sniff delimiter as ',' [2021-05-14:08:07:30:INFO] Determined 
delimiter of CSV input is ',' [2021-05-14:08:07:34:INFO] Sniff delimiter as ',' [2021-05-14:08:07:34:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:34:INFO] Sniff delimiter as ',' [2021-05-14:08:07:34:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:37:INFO] Sniff delimiter as ',' [2021-05-14:08:07:37:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:37:INFO] Sniff delimiter as ',' [2021-05-14:08:07:37:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:37:INFO] Sniff delimiter as ',' [2021-05-14:08:07:37:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:37:INFO] Sniff delimiter as ',' [2021-05-14:08:07:37:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:37:INFO] Sniff delimiter as ',' [2021-05-14:08:07:37:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:37:INFO] Sniff delimiter as ',' [2021-05-14:08:07:37:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:38:INFO] Sniff delimiter as ',' [2021-05-14:08:07:38:INFO] Sniff delimiter as ',' [2021-05-14:08:07:38:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:38:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:41:INFO] Sniff delimiter as ',' [2021-05-14:08:07:41:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:41:INFO] Sniff delimiter as ',' [2021-05-14:08:07:41:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:41:INFO] Sniff delimiter as ',' [2021-05-14:08:07:41:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:41:INFO] Sniff delimiter as ',' [2021-05-14:08:07:41:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:41:INFO] Sniff delimiter as ',' [2021-05-14:08:07:41:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:41:INFO] Sniff delimiter as ',' [2021-05-14:08:07:41:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:41:INFO] Sniff delimiter as ',' [2021-05-14:08:07:41:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:41:INFO] Sniff delimiter as ',' [2021-05-14:08:07:41:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:49:INFO] Sniff delimiter as ',' [2021-05-14:08:07:49:INFO] Sniff delimiter as ',' [2021-05-14:08:07:49:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:49:INFO] Sniff delimiter as ',' [2021-05-14:08:07:49:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:49:INFO] Sniff delimiter as ',' [2021-05-14:08:07:49:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:49:INFO] Sniff delimiter as ',' [2021-05-14:08:07:49:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:49:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:49:INFO] Sniff delimiter as ',' [2021-05-14:08:07:49:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:49:INFO] Sniff delimiter as ',' [2021-05-14:08:07:49:INFO] Determined delimiter of CSV input is ',' [2021-05-14:08:07:49:INFO] Sniff delimiter as ',' [2021-05-14:08:07:49:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/469.5 KiB (2.8 MiB/s) with 1 file(s) remaining Completed 469.5 KiB/469.5 KiB (4.7 MiB/s) with 1 file(s) remaining download: s3://sagemaker-ap-south-1-135661043022/xgboost-2021-05-14-08-02-56-099/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. 
###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. # test_X = None # Solution: test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. 
The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. # new_xgb_endpoint_config_name = None # Solution: new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. # new_xgb_endpoint_config_info = None # Solution: new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. # Solution: session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output -------------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? 
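One way to start exploring the second question is sketched below: rather than re-training on only the new reviews, we could blend them with the cached original training reviews from Step 3 before re-fitting. This is only a rough sketch under that assumption; the `combined_*` names are hypothetical and are not used anywhere else in this notebook.
###Code
# Sketch only: blend the cached original training reviews with the newly collected ones.
# The combined_* names are hypothetical and do not reappear later in this notebook.
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
    original_cache = pickle.load(f)

combined_X = original_cache['words_train'] + new_X
combined_Y = original_cache['labels_train'] + new_Y

# A vectorizer fitted on the blended corpus could then feed a fresh XGBoost training job,
# following the same steps that were used for the new data above.
combined_vectorizer = CountVectorizer(max_features=5000,
                                      preprocessor=lambda x: x,
                                      tokenizer=lambda x: x)
combined_XV = combined_vectorizer.fit_transform(combined_X).toarray()
###Output
_____no_output_____
###Markdown
Whether such a blend actually helps would still need to be measured on a held-out sample drawn from the new distribution.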
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. 
###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-05-01 18:19:41-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 22.7MB/s in 4.1s 2020-05-01 18:19:46 (19.4 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
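To make the bag-of-words idea concrete before running the cached extraction below, here is a tiny, self-contained illustration on toy token lists (not the IMDb reviews); the `toy_*` names exist only for this example.
###Code
# Toy illustration of bag-of-words: each document becomes a vector of word counts
# over the vocabulary learned from the (toy) training documents.
from sklearn.feature_extraction.text import CountVectorizer

toy_train = [['great', 'movi', 'great', 'act'],
             ['bad', 'movi', 'bad', 'plot']]

# The documents are already tokenized, so identity functions skip preprocessing/tokenization.
toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
toy_features = toy_vectorizer.fit_transform(toy_train).toarray()

toy_vectorizer.vocabulary_, toy_features
###Output
_____no_output_____
###Markdown
Words that never appear in the training documents get no column at all, which is why the vocabulary built from the training reviews matters so much later on.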
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
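# (Without the earlier shuffle the first 10,000 reviews would all be positive, since
#  prepare_imdb_data concatenates the positive reviews before the negative ones.)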
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker.
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2020-05-01 18:29:25 Starting - Starting the training job... 2020-05-01 18:29:26 Starting - Launching requested ML instances...... 2020-05-01 18:30:50 Starting - Preparing the instances for training...... 2020-05-01 18:31:39 Downloading - Downloading input data... 2020-05-01 18:32:13 Training - Downloading the training image..Arguments: train [2020-05-01:18:32:33:INFO] Running standalone xgboost training. [2020-05-01:18:32:33:INFO] File size need to be processed in the node: 238.47mb.
Available memory size in the node: 8488.65mb [2020-05-01:18:32:33:INFO] Determined delimiter of CSV input is ',' [18:32:33] S3DistributionType set as FullyReplicated [18:32:34] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-05-01:18:32:34:INFO] Determined delimiter of CSV input is ',' [18:32:34] S3DistributionType set as FullyReplicated [18:32:35] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [18:32:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 2 pruned nodes, max_depth=5 [0]#011train-error:0.299533#011validation-error:0.3025 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [18:32:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [1]#011train-error:0.281733#011validation-error:0.2809 [18:32:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [2]#011train-error:0.2786#011validation-error:0.2777 [18:32:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 2020-05-01 18:32:32 Training - Training image download completed. Training in progress.[3]#011train-error:0.270067#011validation-error:0.2712 [18:32:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [4]#011train-error:0.259933#011validation-error:0.2625 [18:32:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [5]#011train-error:0.246133#011validation-error:0.2488 [18:32:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [6]#011train-error:0.2408#011validation-error:0.2474 [18:32:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 8 pruned nodes, max_depth=5 [7]#011train-error:0.234133#011validation-error:0.2446 [18:32:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [8]#011train-error:0.2282#011validation-error:0.2373 [18:32:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [9]#011train-error:0.222067#011validation-error:0.2342 [18:32:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 0 pruned nodes, max_depth=5 [10]#011train-error:0.213733#011validation-error:0.227 [18:32:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [11]#011train-error:0.211267#011validation-error:0.223 [18:32:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [12]#011train-error:0.208533#011validation-error:0.2218 [18:32:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [13]#011train-error:0.205667#011validation-error:0.2198 [18:32:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [14]#011train-error:0.2056#011validation-error:0.2169 [18:32:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [15]#011train-error:0.1988#011validation-error:0.2134 [18:33:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned 
nodes, max_depth=5 [16]#011train-error:0.196933#011validation-error:0.213 [18:33:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [17]#011train-error:0.194#011validation-error:0.2104 [18:33:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 14 pruned nodes, max_depth=5 [18]#011train-error:0.191933#011validation-error:0.2094 [18:33:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [19]#011train-error:0.1872#011validation-error:0.2046 [18:33:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [20]#011train-error:0.184867#011validation-error:0.2016 [18:33:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [21]#011train-error:0.1808#011validation-error:0.1994 [18:33:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [22]#011train-error:0.178733#011validation-error:0.1973 [18:33:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [23]#011train-error:0.176133#011validation-error:0.1959 [18:33:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 16 pruned nodes, max_depth=5 [24]#011train-error:0.174533#011validation-error:0.1951 [18:33:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [25]#011train-error:0.169933#011validation-error:0.1921 [18:33:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [26]#011train-error:0.168133#011validation-error:0.1906 [18:33:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [27]#011train-error:0.167667#011validation-error:0.1903 [18:33:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [28]#011train-error:0.1652#011validation-error:0.1886 [18:33:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [29]#011train-error:0.1634#011validation-error:0.1872 [18:33:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 14 pruned nodes, max_depth=5 [30]#011train-error:0.161867#011validation-error:0.1847 [18:33:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [31]#011train-error:0.161#011validation-error:0.1862 [18:33:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [32]#011train-error:0.1594#011validation-error:0.1838 [18:33:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [33]#011train-error:0.1578#011validation-error:0.1825 [18:33:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [34]#011train-error:0.157133#011validation-error:0.182 [18:33:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [35]#011train-error:0.157267#011validation-error:0.182 [18:33:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [36]#011train-error:0.1562#011validation-error:0.1818 [18:33:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 
[37]#011train-error:0.154333#011validation-error:0.1804 [18:33:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5 [38]#011train-error:0.152067#011validation-error:0.1795 [18:33:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [39]#011train-error:0.150667#011validation-error:0.1792 [18:33:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [40]#011train-error:0.149733#011validation-error:0.1786 [18:33:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [41]#011train-error:0.148333#011validation-error:0.1772 [18:33:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [42]#011train-error:0.147#011validation-error:0.1769 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code xgb_transformer.wait() ###Output ...................Arguments: serve [2020-05-01 18:40:42 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-05-01 18:40:42 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-05-01 18:40:42 +0000] [1] [INFO] Using worker: gevent [2020-05-01 18:40:42 +0000] [38] [INFO] Booting worker with pid: 38 [2020-05-01 18:40:42 +0000] [39] [INFO] Booting worker with pid: 39 [2020-05-01 18:40:42 +0000] [40] [INFO] Booting worker with pid: 40 [2020-05-01 18:40:42 +0000] [41] [INFO] Booting worker with pid: 41 [2020-05-01:18:40:42:INFO] Model loaded successfully for worker : 38 [2020-05-01:18:40:42:INFO] Model loaded successfully for worker : 39 [2020-05-01:18:40:42:INFO] Model loaded successfully for worker : 40 [2020-05-01:18:40:42:INFO] Model loaded successfully for worker : 41 2020-05-01T18:41:11.497:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-05-01:18:41:14:INFO] Sniff delimiter as ',' [2020-05-01:18:41:14:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:14:INFO] Sniff delimiter as ',' [2020-05-01:18:41:14:INFO] Sniff delimiter as ',' [2020-05-01:18:41:14:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:14:INFO] Sniff delimiter as ',' [2020-05-01:18:41:14:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:14:INFO] Sniff delimiter as ',' [2020-05-01:18:41:14:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:14:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:14:INFO] Sniff delimiter as ',' [2020-05-01:18:41:14:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:16:INFO] Sniff delimiter as ',' [2020-05-01:18:41:16:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:16:INFO] Sniff delimiter as ',' [2020-05-01:18:41:16:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:16:INFO] Sniff delimiter as ',' [2020-05-01:18:41:16:INFO] Sniff delimiter as ',' [2020-05-01:18:41:16:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:16:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:17:INFO] Sniff delimiter as ',' [2020-05-01:18:41:17:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:17:INFO] Sniff delimiter as ',' [2020-05-01:18:41:17:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:17:INFO] Sniff delimiter as ',' [2020-05-01:18:41:17:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:17:INFO] Sniff delimiter as ',' [2020-05-01:18:41:17:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:17:INFO] Sniff delimiter as ',' [2020-05-01:18:41:17:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:17:INFO] Sniff delimiter as ',' [2020-05-01:18:41:17:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:17:INFO] Sniff delimiter as ',' [2020-05-01:18:41:17:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:17:INFO] Sniff delimiter as ',' [2020-05-01:18:41:17:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:19:INFO] Sniff delimiter as ',' [2020-05-01:18:41:19:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:19:INFO] Sniff delimiter as ',' [2020-05-01:18:41:19:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:20:INFO] Sniff delimiter as ',' [2020-05-01:18:41:20:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:19:INFO] Sniff delimiter as ',' [2020-05-01:18:41:19:INFO] Determined delimiter of CSV input is ',' 
[2020-05-01:18:41:31:INFO] Sniff delimiter as ',' [2020-05-01:18:41:31:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:32:INFO] Sniff delimiter as ',' [2020-05-01:18:41:32:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:32:INFO] Sniff delimiter as ',' [2020-05-01:18:41:32:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:32:INFO] Sniff delimiter as ',' [2020-05-01:18:41:32:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:32:INFO] Sniff delimiter as ',' [2020-05-01:18:41:32:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:41:32:INFO] Sniff delimiter as ',' [2020-05-01:18:41:32:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/370.2 KiB (3.1 MiB/s) with 1 file(s) remaining Completed 370.2 KiB/370.2 KiB (4.4 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-037690205935/xgboost-2020-05-01-18-37-35-503/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary # vectorizer = None # Solution: vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV # new_XV = None # Solution new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv # Solution: pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location #new_data_location = None # Solution: new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. 
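# Note: xgb_transformer was built from the ORIGINAL model back in Step 4, so the predictions
# produced here show how the currently deployed model behaves on the newly collected reviews.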
# Solution: xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ....................Arguments: serve [2020-05-01 18:55:19 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-05-01 18:55:19 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-05-01 18:55:19 +0000] [1] [INFO] Using worker: gevent [2020-05-01 18:55:19 +0000] [37] [INFO] Booting worker with pid: 37 [2020-05-01 18:55:19 +0000] [38] [INFO] Booting worker with pid: 38 [2020-05-01 18:55:19 +0000] [39] [INFO] Booting worker with pid: 39 [2020-05-01:18:55:19:INFO] Model loaded successfully for worker : 37 [2020-05-01 18:55:19 +0000] [40] [INFO] Booting worker with pid: 40 [2020-05-01:18:55:19:INFO] Model loaded successfully for worker : 38 [2020-05-01:18:55:19:INFO] Model loaded successfully for worker : 40 [2020-05-01:18:55:19:INFO] Model loaded successfully for worker : 39 2020-05-01T18:55:41.738:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-05-01:18:55:44:INFO] Sniff delimiter as ',' [2020-05-01:18:55:44:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:44:INFO] Sniff delimiter as ',' [2020-05-01:18:55:44:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:44:INFO] Sniff delimiter as ',' [2020-05-01:18:55:44:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:44:INFO] Sniff delimiter as ',' [2020-05-01:18:55:44:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:44:INFO] Sniff delimiter as ',' [2020-05-01:18:55:44:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:44:INFO] Sniff delimiter as ',' [2020-05-01:18:55:44:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:45:INFO] Sniff delimiter as ',' [2020-05-01:18:55:45:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:45:INFO] Sniff delimiter as ',' [2020-05-01:18:55:45:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:46:INFO] Sniff delimiter as ',' [2020-05-01:18:55:46:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:47:INFO] Sniff delimiter as ',' [2020-05-01:18:55:47:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:47:INFO] Sniff delimiter as ',' [2020-05-01:18:55:47:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:46:INFO] Sniff delimiter as ',' [2020-05-01:18:55:46:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:47:INFO] Sniff delimiter as ',' [2020-05-01:18:55:47:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:47:INFO] Sniff delimiter as ',' [2020-05-01:18:55:47:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:48:INFO] Sniff delimiter as ',' [2020-05-01:18:55:48:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:48:INFO] Sniff delimiter as ',' [2020-05-01:18:55:48:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:49:INFO] Sniff delimiter as ',' [2020-05-01:18:55:49:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:49:INFO] Sniff delimiter as ',' [2020-05-01:18:55:49:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:49:INFO] Sniff delimiter as ',' [2020-05-01:18:55:49:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:49:INFO] Sniff delimiter as ',' [2020-05-01:18:55:49:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:49:INFO] Sniff delimiter as ',' [2020-05-01:18:55:49:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:49:INFO] Sniff delimiter 
delimiter as ',' [2020-05-01:18:55:59:INFO] Sniff delimiter as ',' [2020-05-01:18:55:59:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:59:INFO] Sniff delimiter as ',' [2020-05-01:18:55:59:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:59:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:55:59:INFO] Sniff delimiter as ',' [2020-05-01:18:55:59:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:56:01:INFO] Sniff delimiter as ',' [2020-05-01:18:56:01:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:56:01:INFO] Sniff delimiter as ',' [2020-05-01:18:56:01:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:56:01:INFO] Sniff delimiter as ',' [2020-05-01:18:56:01:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:56:02:INFO] Sniff delimiter as ',' [2020-05-01:18:56:02:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:56:02:INFO] Sniff delimiter as ',' [2020-05-01:18:56:02:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:56:01:INFO] Sniff delimiter as ',' [2020-05-01:18:56:01:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:56:02:INFO] Sniff delimiter as ',' [2020-05-01:18:56:02:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:56:02:INFO] Sniff delimiter as ',' [2020-05-01:18:56:02:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:56:04:INFO] Sniff delimiter as ',' [2020-05-01:18:56:04:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:56:04:INFO] Sniff delimiter as ',' [2020-05-01:18:56:04:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:56:04:INFO] Sniff delimiter as ',' [2020-05-01:18:56:04:INFO] Determined delimiter of CSV input is ',' [2020-05-01:18:56:04:INFO] Sniff delimiter as ',' [2020-05-01:18:56:04:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/370.5 KiB (3.0 MiB/s) with 1 file(s) remaining Completed 370.5 KiB/370.5 KiB (4.2 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-037690205935/xgboost-2020-05-01-18-52-07-586/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. 
Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. #xgb_predictor = None # Solution: xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output WARNING:sagemaker:Using already existing model: xgboost-2020-05-01-18-29-25-152 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['expect', 'love', 'movi', 'film', 'noir', 'serial', 'killer', 'dark', 'ironi', 'baffl', 'mani', 'choic', 'charact', 'made', 'hey', 'know', 'creepi', 'look', 'let', 'hook', 'cross', 'countri', 'road', 'trip', 'anyway', 'found', 'pace', 'glacial', 'emphasi', 'moodi', 'light', 'take', 'place', 'origin', 'thought', 'director', 'cinematograph', 'think', 'would', 'much', 'better', 'movi', 'someon', 'run', 'script', 'common', 'sens', 'meter', '1992', 'model', 'start', 'film'], 0) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. 
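First, though, a quick frequency tally over the new reviews gives us a reference point for the question of whether particular words have become unexpectedly common. The cell below is a rough, optional sketch; it only assumes that `new_X` still holds the list of tokenized new reviews. ###Code
from collections import Counter

# Tally how often each token appears across the new reviews so that, once the two
# vocabularies have been compared, any differing words can be checked for
# unexpectedly high frequencies.
new_word_counts = Counter(word for review in new_X for word in review)
print(new_word_counts.most_common(10))
###Output
_____no_output_____
###Markdown
With that tally available for later reference, we fit the new vectorizer.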
###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'21st', 'weari', 'playboy', 'reincarn', 'spill', 'victorian', 'ghetto'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'omin', 'optimist', 'banana', 'sophi', 'masterson', 'dubiou', 'orchestr'} ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question:** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. #new_data_location = None #new_val_location = None #new_train_location = None # Solution: new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. # new_xgb = None # Solution: new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. # Solution: new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data.
s3_new_input_train = None s3_new_input_validation = None # Solution: s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. # Solution: new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2020-05-01 19:16:14 Starting - Starting the training job... 2020-05-01 19:16:17 Starting - Launching requested ML instances... 2020-05-01 19:17:12 Starting - Preparing the instances for training...... 2020-05-01 19:17:59 Downloading - Downloading input data... 2020-05-01 19:18:35 Training - Training image download completed. Training in progress.Arguments: train [2020-05-01:19:18:35:INFO] Running standalone xgboost training. [2020-05-01:19:18:35:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8479.96mb [2020-05-01:19:18:35:INFO] Determined delimiter of CSV input is ',' [19:18:35] S3DistributionType set as FullyReplicated [19:18:37] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-05-01:19:18:37:INFO] Determined delimiter of CSV input is ',' [19:18:37] S3DistributionType set as FullyReplicated [19:18:38] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [19:18:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [0]#011train-error:0.302733#011validation-error:0.3006 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. 
[19:18:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 2 pruned nodes, max_depth=5 [1]#011train-error:0.289067#011validation-error:0.2872 [19:18:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 6 pruned nodes, max_depth=5 [2]#011train-error:0.286067#011validation-error:0.2848 [19:18:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [3]#011train-error:0.2828#011validation-error:0.2824 [19:18:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [4]#011train-error:0.272933#011validation-error:0.2724 [19:18:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 2 pruned nodes, max_depth=5 [5]#011train-error:0.260733#011validation-error:0.2621 [19:18:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [6]#011train-error:0.255667#011validation-error:0.2587 [19:18:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [7]#011train-error:0.247#011validation-error:0.2525 [19:18:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [8]#011train-error:0.239933#011validation-error:0.242 [19:18:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [9]#011train-error:0.234667#011validation-error:0.2402 [19:18:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [10]#011train-error:0.2364#011validation-error:0.2391 [19:18:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [11]#011train-error:0.234333#011validation-error:0.2389 [19:18:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 8 pruned nodes, max_depth=5 [12]#011train-error:0.227#011validation-error:0.236 [19:18:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 0 pruned nodes, max_depth=5 [13]#011train-error:0.221#011validation-error:0.2289 [19:19:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [14]#011train-error:0.217667#011validation-error:0.2254 [19:19:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [15]#011train-error:0.2128#011validation-error:0.2215 [19:19:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5 [16]#011train-error:0.2086#011validation-error:0.2187 [19:19:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 2 pruned nodes, max_depth=5 [17]#011train-error:0.2064#011validation-error:0.219 [19:19:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5 [18]#011train-error:0.204#011validation-error:0.2195 [19:19:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [19]#011train-error:0.201667#011validation-error:0.2148 [19:19:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [20]#011train-error:0.1986#011validation-error:0.2116 [19:19:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [21]#011train-error:0.196#011validation-error:0.2085 [19:19:10] src/tree/updater_prune.cc:74: tree pruning end, 1 
roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [22]#011train-error:0.193933#011validation-error:0.2074 [19:19:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [23]#011train-error:0.191933#011validation-error:0.2062 [19:19:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [24]#011train-error:0.189#011validation-error:0.2048 [19:19:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5 [25]#011train-error:0.185267#011validation-error:0.2013 [19:19:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [26]#011train-error:0.1828#011validation-error:0.2 [19:19:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [27]#011train-error:0.1804#011validation-error:0.199 [19:19:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [28]#011train-error:0.1822#011validation-error:0.2009 [19:19:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [29]#011train-error:0.181#011validation-error:0.1989 [19:19:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [30]#011train-error:0.1786#011validation-error:0.1985 [19:19:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [31]#011train-error:0.178#011validation-error:0.1975 [19:19:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [32]#011train-error:0.1772#011validation-error:0.1968 [19:19:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 16 pruned nodes, max_depth=5 [33]#011train-error:0.175267#011validation-error:0.1942 [19:19:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [34]#011train-error:0.171933#011validation-error:0.1916 [19:19:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 6 pruned nodes, max_depth=5 [35]#011train-error:0.1706#011validation-error:0.1892 [19:19:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 16 pruned nodes, max_depth=5 [36]#011train-error:0.169533#011validation-error:0.1895 [19:19:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [37]#011train-error:0.168133#011validation-error:0.1891 [19:19:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [38]#011train-error:0.1672#011validation-error:0.1894 [19:19:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [39]#011train-error:0.165067#011validation-error:0.1879 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. 
We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model # new_xgb_transformer = None # Solution: new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. # Solution: new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ....................Arguments: serve [2020-05-01 19:25:18 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-05-01 19:25:18 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-05-01 19:25:18 +0000] [1] [INFO] Using worker: gevent [2020-05-01 19:25:18 +0000] [37] [INFO] Booting worker with pid: 37 [2020-05-01 19:25:18 +0000] [38] [INFO] Booting worker with pid: 38 [2020-05-01:19:25:18:INFO] Model loaded successfully for worker : 37 [2020-05-01 19:25:18 +0000] [39] [INFO] Booting worker with pid: 39 [2020-05-01 19:25:18 +0000] [40] [INFO] Booting worker with pid: 40 [2020-05-01:19:25:18:INFO] Model loaded successfully for worker : 38 [2020-05-01:19:25:18:INFO] Model loaded successfully for worker : 39 [2020-05-01:19:25:18:INFO] Model loaded successfully for worker : 40 [2020-05-01:19:25:36:INFO] Sniff delimiter as ',' [2020-05-01:19:25:36:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:36:INFO] Sniff delimiter as ',' [2020-05-01:19:25:36:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:36:INFO] Sniff delimiter as ',' [2020-05-01:19:25:36:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:36:INFO] Sniff delimiter as ',' [2020-05-01:19:25:36:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:36:INFO] Sniff delimiter as ',' [2020-05-01:19:25:36:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:36:INFO] Sniff delimiter as ',' [2020-05-01:19:25:36:INFO] Determined delimiter of CSV input is ',' 2020-05-01T19:25:33.426:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-05-01:19:25:38:INFO] Sniff delimiter as ',' [2020-05-01:19:25:38:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:38:INFO] Sniff delimiter as ',' [2020-05-01:19:25:38:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:38:INFO] Sniff delimiter as ',' [2020-05-01:19:25:38:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:38:INFO] Sniff delimiter as ',' [2020-05-01:19:25:38:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:38:INFO] Sniff delimiter as ',' [2020-05-01:19:25:38:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:38:INFO] Sniff delimiter as ',' [2020-05-01:19:25:38:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:39:INFO] Sniff delimiter as ',' [2020-05-01:19:25:39:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:39:INFO] Sniff delimiter as ',' [2020-05-01:19:25:39:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:39:INFO] Sniff delimiter as ',' 
[2020-05-01:19:25:39:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:39:INFO] Sniff delimiter as ',' [2020-05-01:19:25:39:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:41:INFO] Sniff delimiter as ',' [2020-05-01:19:25:41:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:42:INFO] Sniff delimiter as ',' [2020-05-01:19:25:42:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:41:INFO] Sniff delimiter as ',' [2020-05-01:19:25:41:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:42:INFO] Sniff delimiter as ',' [2020-05-01:19:25:42:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:42:INFO] Sniff delimiter as ',' [2020-05-01:19:25:42:INFO] Sniff delimiter as ',' [2020-05-01:19:25:42:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:43:INFO] Sniff delimiter as ',' [2020-05-01:19:25:43:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:42:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:43:INFO] Sniff delimiter as ',' [2020-05-01:19:25:43:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:44:INFO] Sniff delimiter as ',' [2020-05-01:19:25:44:INFO] Sniff delimiter as ',' [2020-05-01:19:25:44:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:44:INFO] Sniff delimiter as ',' [2020-05-01:19:25:44:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:44:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:44:INFO] Sniff delimiter as ',' [2020-05-01:19:25:44:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:45:INFO] Sniff delimiter as ',' [2020-05-01:19:25:45:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:45:INFO] Sniff delimiter as ',' [2020-05-01:19:25:45:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:45:INFO] Sniff delimiter as ',' [2020-05-01:19:25:45:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:45:INFO] Sniff delimiter as ',' [2020-05-01:19:25:45:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:46:INFO] Sniff delimiter as ',' [2020-05-01:19:25:46:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:46:INFO] Sniff delimiter as ',' [2020-05-01:19:25:46:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:46:INFO] Sniff delimiter as ',' [2020-05-01:19:25:46:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:46:INFO] Sniff delimiter as ',' [2020-05-01:19:25:46:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:47:INFO] Sniff delimiter as ',' [2020-05-01:19:25:47:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:48:INFO] Sniff delimiter as ',' [2020-05-01:19:25:48:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:47:INFO] Sniff delimiter as ',' [2020-05-01:19:25:47:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:48:INFO] Sniff delimiter as ',' [2020-05-01:19:25:48:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:49:INFO] Sniff delimiter as ',' [2020-05-01:19:25:49:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:49:INFO] Sniff delimiter as ',' [2020-05-01:19:25:49:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:49:INFO] Sniff delimiter as ',' [2020-05-01:19:25:49:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:50:INFO] Sniff delimiter as ',' [2020-05-01:19:25:50:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:49:INFO] Sniff delimiter as ',' 
[2020-05-01:19:25:49:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:50:INFO] Sniff delimiter as ',' [2020-05-01:19:25:50:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:52:INFO] Sniff delimiter as ',' [2020-05-01:19:25:52:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:52:INFO] Sniff delimiter as ',' [2020-05-01:19:25:52:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:52:INFO] Sniff delimiter as ',' [2020-05-01:19:25:52:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:53:INFO] Sniff delimiter as ',' [2020-05-01:19:25:53:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:52:INFO] Sniff delimiter as ',' [2020-05-01:19:25:52:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:53:INFO] Sniff delimiter as ',' [2020-05-01:19:25:53:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:54:INFO] Sniff delimiter as ',' [2020-05-01:19:25:54:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:54:INFO] Sniff delimiter as ',' [2020-05-01:19:25:54:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:54:INFO] Sniff delimiter as ',' [2020-05-01:19:25:54:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:55:INFO] Sniff delimiter as ',' [2020-05-01:19:25:55:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:54:INFO] Sniff delimiter as ',' [2020-05-01:19:25:54:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:55:INFO] Sniff delimiter as ',' [2020-05-01:19:25:55:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:55:INFO] Sniff delimiter as ',' [2020-05-01:19:25:55:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:55:INFO] Sniff delimiter as ',' [2020-05-01:19:25:55:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:56:INFO] Sniff delimiter as ',' [2020-05-01:19:25:56:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:56:INFO] Sniff delimiter as ',' [2020-05-01:19:25:56:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:56:INFO] Sniff delimiter as ',' [2020-05-01:19:25:56:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:56:INFO] Sniff delimiter as ',' [2020-05-01:19:25:56:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:57:INFO] Sniff delimiter as ',' [2020-05-01:19:25:57:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:58:INFO] Sniff delimiter as ',' [2020-05-01:19:25:58:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:57:INFO] Sniff delimiter as ',' [2020-05-01:19:25:57:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:58:INFO] Sniff delimiter as ',' [2020-05-01:19:25:58:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:59:INFO] Sniff delimiter as ',' [2020-05-01:19:25:59:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:59:INFO] Sniff delimiter as ',' [2020-05-01:19:25:59:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:59:INFO] Sniff delimiter as ',' [2020-05-01:19:25:59:INFO] Determined delimiter of CSV input is ',' [2020-05-01:19:25:59:INFO] Sniff delimiter as ',' [2020-05-01:19:25:59:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. 
###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/366.6 KiB (2.1 MiB/s) with 1 file(s) remaining Completed 366.6 KiB/366.6 KiB (3.0 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-037690205935/xgboost-2020-05-01-19-22-12-901/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. # test_X = None # Solution: test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. 
As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. # new_xgb_endpoint_config_name = None # Solution: new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. # new_xgb_endpoint_config_info = None # Solution: new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. # Solution: session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output -----------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. 
Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). 
It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-05-31 21:55:15-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 9.01MB/s in 13s 2020-05-31 21:55:29 (6.02 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our 
training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. ###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output _____no_output_____ ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
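To make the encoding concrete, here is a tiny toy illustration of what a bag-of-words representation looks like for a couple of made-up, already-tokenized reviews (the tokens and variable names below are invented purely for illustration): ###Code
from sklearn.feature_extraction.text import CountVectorizer

# Two made-up, already-tokenized "reviews" and the counts the vectorizer produces for them.
toy_reviews = [['great', 'movi', 'great', 'act'], ['bad', 'movi']]
toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
toy_features = toy_vectorizer.fit_transform(toy_reviews).toarray()
print(toy_vectorizer.vocabulary_)  # word -> column index
print(toy_features)                # one row of word counts per review
###Output
_____no_output_____
###Markdown
The helper below applies the same idea to the full training and test sets, restricted to the `5000` most frequent words, and caches the result so it only needs to be computed once.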
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker.
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output _____no_output_____ ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job.
When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output _____no_output_____ ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
###Code
# TODO: Create the CountVectorizer using the previously constructed vocabulary
vectorizer = CountVectorizer(vocabulary=vocabulary,                # reuse the original training vocabulary
                             preprocessor=lambda x: x,
                             tokenizer=lambda x: x)                # already preprocessed

# TODO: Transform our new data set and store the transformed data in the variable new_XV
new_XV = vectorizer.transform(new_X).toarray()                     # no fitting needed since the vocabulary is fixed
###Output
_____no_output_____
###Markdown
As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code
len(new_XV[100])
###Output
_____no_output_____
###Markdown
Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code
# TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting
# URI as new_data_location
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code
# TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until
# the batch transform job has finished.
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment.
Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. ###Code
# Make sure that we use SageMaker 1.x
!pip install sagemaker==1.72.0
###Output
_____no_output_____
###Markdown
Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
_____no_output_____
###Markdown
Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output _____no_output_____ ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
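To make that point concrete, here is a small, purely illustrative sketch (the toy token lists below are made up and are not part of the notebook's pipeline) showing that a `CountVectorizer` fitted on the training documents simply ignores words it has never seen when transforming other documents. ###Code
# Illustrative toy example only: fit on "training" documents, transform "test" documents.
from sklearn.feature_extraction.text import CountVectorizer

toy_train = [["great", "movi", "great"], ["bad", "plot"]]  # pretend these are preprocessed reviews
toy_test = [["great", "sequel"]]                           # 'sequel' never appeared during fitting

toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
print(toy_vectorizer.fit_transform(toy_train).toarray())   # columns come from the training vocabulary only
print(toy_vectorizer.transform(toy_test).toarray())        # unseen words such as 'sequel' are dropped
print(toy_vectorizer.vocabulary_)
###Output
_____no_output_____
###Markdown
The full, cached implementation used by the notebook follows.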
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])

val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
    os.makedirs(data_dir)

pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)

pd.concat([pd.DataFrame(val_y), pd.DataFrame(val_X)], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X)], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)

# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code
import sagemaker

session = sagemaker.Session() # Store the current SageMaker session

# S3 prefix (which folder will we use)
prefix = 'sentiment-update'

test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker.
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code
from sagemaker import get_execution_role

# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()

# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri

container = get_image_uri(session.boto_region_name, 'xgboost')

# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
                                    role,                                    # What is our current IAM Role
                                    train_instance_count=1,                  # How many compute instances
                                    train_instance_type='ml.m4.xlarge',      # What kind of compute instances
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                    sagemaker_session=session)

# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
                        eta=0.2,
                        gamma=4,
                        min_child_weight=6,
                        subsample=0.8,
                        silent=0,
                        objective='binary:logistic',
                        early_stopping_rounds=10,
                        num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')

xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. ###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job.
When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output _____no_output_____ ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
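As a hedged reference, one possible way to complete this TODO is sketched below; the `example_` names are purely illustrative so that they do not collide with the variables the template cell asks you to define, and the key point is that when a fixed `vocabulary` is supplied there is nothing to fit, so `transform` can be called directly. ###Code
# Reference sketch only (hypothetical 'example_' names): one possible completion of the TODO below.
from sklearn.feature_extraction.text import CountVectorizer

example_vectorizer = CountVectorizer(vocabulary=vocabulary,       # reuse the original training vocabulary
                                     preprocessor=lambda x: x,
                                     tokenizer=lambda x: x)       # reviews are already tokenized
example_new_XV = example_vectorizer.transform(new_X).toarray()    # no fitting needed with a fixed vocabulary
###Output
_____no_output_____
###Markdown
The template cell to fill in follows.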
###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = None # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = None ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = None ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. ###Output _____no_output_____ ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. 
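Since this step depends on the two TODO cells above it (saving `new_XV` locally and uploading it to S3), a commented-out sketch of one possible way to complete all three steps is included here for reference; it simply reuses the `data_dir` and `prefix` values defined in Step 4 and mirrors how the test set was transformed earlier. ###Code
# Reference sketch only (commented out so it does not pre-empt the TODO cells):
# pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
# new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
# xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
# xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
The template cell for the transform step follows.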
This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = None ###Output _____no_output_____ ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output _____no_output_____ ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output _____no_output_____ ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output _____no_output_____ ###Markdown These words themselves don't tell us much, however if one of these words occured with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here. 
Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send an bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. 
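Several of the upcoming cells are left as TODOs (uploading the new files to S3, constructing a new estimator and fitting it). For reference, a commented-out sketch of one way those TODOs might be completed, simply mirroring the container, role and hyperparameters used for the original model in Step 4, is given below; the cells that follow then walk through these steps in order. ###Code
# Reference sketch only (commented out so it does not pre-empt the TODO cells further below).
#
# new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
# new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
# new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
#
# new_xgb = sagemaker.estimator.Estimator(container, role,
#                                         train_instance_count=1,
#                                         train_instance_type='ml.m4.xlarge',
#                                         output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
#                                         sagemaker_session=session)
# new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8,
#                             silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500)
#
# s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
# s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
_____no_output_____
###Markdown
First, the re-encoded data is written to disk.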
###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = None new_val_location = None new_train_location = None ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = None # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = None s3_new_input_validation = None # TODO: Using the new validation and training data, 'fit' your new model. ###Output _____no_output_____ ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = None ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. ###Output _____no_output_____ ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown And see how well the model did. 
###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output _____no_output_____ ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = None ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. 
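For the low level TODOs that follow, a commented-out sketch of how the endpoint configuration and endpoint update calls might look is included here as a reference; the endpoint configuration name and variant name below are illustrative assumptions, and `session.sagemaker_client` is the boto3 SageMaker client attached to our high level session. ###Code
# Reference sketch only (commented out so it does not pre-empt the TODO cells below); names are illustrative.
#
# from time import gmtime, strftime
# new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
#
# new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
#                                    EndpointConfigName=new_xgb_endpoint_config_name,
#                                    ProductionVariants=[{
#                                        "InstanceType": "ml.m4.xlarge",
#                                        "InitialVariantWeight": 1,
#                                        "InitialInstanceCount": 1,
#                                        "ModelName": new_xgb_transformer.model_name,
#                                        "VariantName": "XGB-Model"
#                                    }])
#
# session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint,
#                                          EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
First, the name of the model created by the transformer.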
###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = None # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = None ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. 
###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-10-09 08:41:08-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 
200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 25.4MB/s in 3.8s 2020-10-09 08:41:12 (21.4 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib #import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays # import numpy as np # from sklearn.feature_extraction.text import CountVectorizer # from sklearn.externals import joblib def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role

# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()

# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')

# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
                                    role,      # What is our current IAM Role
                                    train_instance_count=1, # How many compute instances
                                    train_instance_type='ml.m4.xlarge', # What kind of compute instances
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                    sagemaker_session=session)

# And then set the algorithm-specific parameters.
xgb.set_hyperparameters(max_depth=5,
                        eta=0.2,
                        gamma=4,
                        min_child_weight=6,
                        subsample=0.8,
                        silent=0,
                        objective='binary:logistic',
                        early_stopping_rounds=10,
                        num_round=500)
###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Fit the XGBoost model Now that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')

xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output 2020-10-09 08:45:16 Starting - Starting the training job... 2020-10-09 08:45:19 Starting - Launching requested ML instances...... 2020-10-09 08:46:33 Starting - Preparing the instances for training...... 2020-10-09 08:47:44 Downloading - Downloading input data 2020-10-09 08:47:44 Training - Downloading the training image... 2020-10-09 08:48:03 Training - Training image download completed. Training in progress.Arguments: train [2020-10-09:08:48:03:INFO] Running standalone xgboost training. [2020-10-09:08:48:03:INFO] File size need to be processed in the node: 238.47mb.
Available memory size in the node: 8469.08mb [2020-10-09:08:48:03:INFO] Determined delimiter of CSV input is ',' [08:48:03] S3DistributionType set as FullyReplicated [08:48:06] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-10-09:08:48:06:INFO] Determined delimiter of CSV input is ',' [08:48:06] S3DistributionType set as FullyReplicated [08:48:07] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [08:48:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [0]#011train-error:0.2976#011validation-error:0.2974 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [08:48:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 2 pruned nodes, max_depth=5 [1]#011train-error:0.283333#011validation-error:0.2858 [08:48:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [2]#011train-error:0.280467#011validation-error:0.2836 [08:48:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [3]#011train-error:0.279#011validation-error:0.2793 [08:48:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [4]#011train-error:0.2722#011validation-error:0.2736 [08:48:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [5]#011train-error:0.269267#011validation-error:0.2719 [08:48:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 2 pruned nodes, max_depth=5 [6]#011train-error:0.2562#011validation-error:0.2572 [08:48:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [7]#011train-error:0.250267#011validation-error:0.2488 [08:48:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 0 pruned nodes, max_depth=5 [8]#011train-error:0.236533#011validation-error:0.2384 [08:48:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [9]#011train-error:0.231733#011validation-error:0.2319 [08:48:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [10]#011train-error:0.225067#011validation-error:0.2271 [08:48:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [11]#011train-error:0.2178#011validation-error:0.2248 [08:48:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [12]#011train-error:0.213333#011validation-error:0.224 [08:48:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [13]#011train-error:0.210867#011validation-error:0.2226 [08:48:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [14]#011train-error:0.2076#011validation-error:0.22 [08:48:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5 [15]#011train-error:0.205667#011validation-error:0.2146 [08:48:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [16]#011train-error:0.2038#011validation-error:0.2146 [08:48:32] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5 [17]#011train-error:0.198667#011validation-error:0.2098 [08:48:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 12 pruned nodes, max_depth=5 [18]#011train-error:0.191533#011validation-error:0.2083 [08:48:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [19]#011train-error:0.1892#011validation-error:0.2049 [08:48:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [20]#011train-error:0.184533#011validation-error:0.2029 [08:48:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [21]#011train-error:0.183467#011validation-error:0.2022 [08:48:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [22]#011train-error:0.182067#011validation-error:0.1997 [08:48:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [23]#011train-error:0.179067#011validation-error:0.1962 [08:48:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [24]#011train-error:0.176867#011validation-error:0.1948 [08:48:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [25]#011train-error:0.175067#011validation-error:0.1928 [08:48:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [26]#011train-error:0.1704#011validation-error:0.1913 [08:48:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5 [27]#011train-error:0.169533#011validation-error:0.1886 [08:48:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [28]#011train-error:0.1672#011validation-error:0.1888 [08:48:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [29]#011train-error:0.1658#011validation-error:0.1869 [08:48:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5 [30]#011train-error:0.1638#011validation-error:0.1864 [08:48:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [31]#011train-error:0.164067#011validation-error:0.1855 [08:48:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [32]#011train-error:0.161267#011validation-error:0.1828 [08:48:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [33]#011train-error:0.1604#011validation-error:0.1818 [08:48:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5 [34]#011train-error:0.1602#011validation-error:0.1811 [08:48:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [35]#011train-error:0.158733#011validation-error:0.1786 [08:48:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [36]#011train-error:0.158#011validation-error:0.1797 [08:48:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [37]#011train-error:0.156#011validation-error:0.1784 [08:48:59] src/tree/updater_prune.cc:74: tree 
pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5 [38]#011train-error:0.154733#011validation-error:0.1759 [08:49:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [39]#011train-error:0.153933#011validation-error:0.176 [08:49:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [40]#011train-error:0.153733#011validation-error:0.1759 [08:49:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 [41]#011train-error:0.152733#011validation-error:0.1746 [08:49:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [42]#011train-error:0.152333#011validation-error:0.174 [08:49:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [43]#011train-error:0.15#011validation-error:0.1723 ###Markdown Testing the model Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code xgb_transformer.wait() ###Output .................................2020-10-09T08:55:53.007:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2020-10-09 08:55:52 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-10-09 08:55:52 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-10-09 08:55:52 +0000] [1] [INFO] Using worker: gevent [2020-10-09 08:55:52 +0000] [37] [INFO] Booting worker with pid: 37 [2020-10-09 08:55:52 +0000] [38] [INFO] Booting worker with pid: 38 [2020-10-09 08:55:52 +0000] [39] [INFO] Booting worker with pid: 39 [2020-10-09 08:55:52 +0000] [40] [INFO] Booting worker with pid: 40 [2020-10-09:08:55:52:INFO] Model loaded successfully for worker : 37 Arguments: serve [2020-10-09 08:55:52 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-10-09 08:55:52 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-10-09 08:55:52 +0000] [1] [INFO] Using worker: gevent [2020-10-09 08:55:52 +0000] [37] [INFO] Booting worker with pid: 37 [2020-10-09 08:55:52 +0000] [38] [INFO] Booting worker with pid: 38 [2020-10-09 08:55:52 +0000] [39] [INFO] Booting worker with pid: 39 [2020-10-09 08:55:52 +0000] [40] [INFO] Booting worker with pid: 40 [2020-10-09:08:55:52:INFO] Model loaded successfully for worker : 37 [2020-10-09:08:55:52:INFO] Model loaded successfully for worker : 38 [2020-10-09:08:55:53:INFO] Model loaded successfully for worker : 39 [2020-10-09:08:55:53:INFO] Model loaded successfully for worker : 40 [2020-10-09:08:55:53:INFO] Sniff delimiter as ',' [2020-10-09:08:55:53:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:55:53:INFO] Sniff delimiter as ',' [2020-10-09:08:55:53:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:55:52:INFO] Model loaded successfully for worker : 38 [2020-10-09:08:55:53:INFO] Model loaded successfully for worker : 39 [2020-10-09:08:55:53:INFO] Model loaded successfully for worker : 40 [2020-10-09:08:55:53:INFO] Sniff delimiter as ',' [2020-10-09:08:55:53:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:55:53:INFO] Sniff delimiter as ',' [2020-10-09:08:55:53:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:55:53:INFO] Sniff delimiter as ',' [2020-10-09:08:55:53:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:55:53:INFO] Sniff delimiter as ',' [2020-10-09:08:55:53:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:55:54:INFO] Sniff delimiter as ',' [2020-10-09:08:55:54:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:55:54:INFO] Sniff delimiter as ',' [2020-10-09:08:55:54:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:55:55:INFO] Sniff delimiter as ',' [2020-10-09:08:55:55:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:55:55:INFO] Sniff delimiter as ',' [2020-10-09:08:55:55:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:55:55:INFO] Sniff delimiter as ',' [2020-10-09:08:55:55:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:55:55:INFO] Sniff delimiter as ',' [2020-10-09:08:55:55:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:55:57:INFO] Sniff delimiter as ',' [2020-10-09:08:55:57:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:55:57:INFO] Sniff delimiter as ',' [2020-10-09:08:55:57:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:55:57:INFO] Sniff delimiter as ',' [2020-10-09:08:55:57:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:55:57:INFO] Sniff delimiter as ',' 
[2020-10-09:08:55:57:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:55:57:INFO] Sniff delimiter as ',' [2020-10-09:08:55:57:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:55:57:INFO] Sniff delimiter as ',' [2020-10-09:08:55:57:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:55:57:INFO] Sniff delimiter as ',' [2020-10-09:08:55:57:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:55:57:INFO] Sniff delimiter as ',' [2020-10-09:08:55:57:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:01:INFO] Sniff delimiter as ',' [2020-10-09:08:56:01:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:01:INFO] Sniff delimiter as ',' [2020-10-09:08:56:01:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:01:INFO] Sniff delimiter as ',' [2020-10-09:08:56:01:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:01:INFO] Sniff delimiter as ',' [2020-10-09:08:56:01:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:02:INFO] Sniff delimiter as ',' [2020-10-09:08:56:02:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:02:INFO] Sniff delimiter as ',' [2020-10-09:08:56:02:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:02:INFO] Sniff delimiter as ',' [2020-10-09:08:56:02:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:02:INFO] Sniff delimiter as ',' [2020-10-09:08:56:02:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:04:INFO] Sniff delimiter as ',' [2020-10-09:08:56:04:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:04:INFO] Sniff delimiter as ',' [2020-10-09:08:56:04:INFO] Sniff delimiter as ',' [2020-10-09:08:56:04:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:04:INFO] Sniff delimiter as ',' [2020-10-09:08:56:04:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:04:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:04:INFO] Sniff delimiter as ',' [2020-10-09:08:56:04:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:04:INFO] Sniff delimiter as ',' [2020-10-09:08:56:04:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:04:INFO] Sniff delimiter as ',' [2020-10-09:08:56:04:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:04:INFO] Sniff delimiter as ',' [2020-10-09:08:56:04:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:06:INFO] Sniff delimiter as ',' [2020-10-09:08:56:06:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:06:INFO] Sniff delimiter as ',' [2020-10-09:08:56:06:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:06:INFO] Sniff delimiter as ',' [2020-10-09:08:56:06:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:07:INFO] Sniff delimiter as ',' [2020-10-09:08:56:07:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:06:INFO] Sniff delimiter as ',' [2020-10-09:08:56:06:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:06:INFO] Sniff delimiter as ',' [2020-10-09:08:56:06:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:06:INFO] Sniff delimiter as ',' [2020-10-09:08:56:06:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:07:INFO] Sniff delimiter as ',' [2020-10-09:08:56:07:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:09:INFO] Sniff delimiter as ',' [2020-10-09:08:56:09:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:09:INFO] Sniff delimiter as ',' 
[2020-10-09:08:56:09:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:09:INFO] Sniff delimiter as ',' [2020-10-09:08:56:09:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:09:INFO] Sniff delimiter as ',' [2020-10-09:08:56:09:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:09:INFO] Sniff delimiter as ',' [2020-10-09:08:56:09:INFO] Determined delimiter of CSV input is ',' [2020-10-09:08:56:09:INFO] Sniff delimiter as ',' [2020-10-09:08:56:09:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/367.3 KiB (3.8 MiB/s) with 1 file(s) remaining Completed 367.3 KiB/367.3 KiB (5.3 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-330335126841/xgboost-2020-10-09-08-50-30-236/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, max_features=5000,preprocessor= lambda x:x, tokenizer=lambda x:x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir,'new_data.csv'),header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir,'new_data.csv'),key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. 
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ...........................2020-10-09T09:03:19.222:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2020-10-09 09:03:19 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-10-09 09:03:19 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-10-09 09:03:19 +0000] [1] [INFO] Using worker: gevent [2020-10-09 09:03:19 +0000] [36] [INFO] Booting worker with pid: 36 Arguments: serve [2020-10-09 09:03:19 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-10-09 09:03:19 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-10-09 09:03:19 +0000] [1] [INFO] Using worker: gevent [2020-10-09 09:03:19 +0000] [36] [INFO] Booting worker with pid: 36 [2020-10-09 09:03:19 +0000] [37] [INFO] Booting worker with pid: 37 [2020-10-09:09:03:19:INFO] Model loaded successfully for worker : 36 [2020-10-09 09:03:19 +0000] [38] [INFO] Booting worker with pid: 38 [2020-10-09 09:03:19 +0000] [39] [INFO] Booting worker with pid: 39 [2020-10-09:09:03:19:INFO] Model loaded successfully for worker : 37 [2020-10-09:09:03:19:INFO] Model loaded successfully for worker : 38 [2020-10-09:09:03:19:INFO] Model loaded successfully for worker : 39 [2020-10-09:09:03:19:INFO] Sniff delimiter as ',' [2020-10-09:09:03:19:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:19:INFO] Sniff delimiter as ',' [2020-10-09:09:03:19:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:19:INFO] Sniff delimiter as ',' [2020-10-09:09:03:19:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:19:INFO] Sniff delimiter as ',' [2020-10-09:09:03:19:INFO] Determined delimiter of CSV input is ',' [2020-10-09 09:03:19 +0000] [37] [INFO] Booting worker with pid: 37 [2020-10-09:09:03:19:INFO] Model loaded successfully for worker : 36 [2020-10-09 09:03:19 +0000] [38] [INFO] Booting worker with pid: 38 [2020-10-09 09:03:19 +0000] [39] [INFO] Booting worker with pid: 39 [2020-10-09:09:03:19:INFO] Model loaded successfully for worker : 37 [2020-10-09:09:03:19:INFO] Model loaded successfully for worker : 38 [2020-10-09:09:03:19:INFO] Model loaded successfully for worker : 39 [2020-10-09:09:03:19:INFO] Sniff delimiter as ',' [2020-10-09:09:03:19:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:19:INFO] Sniff delimiter as ',' [2020-10-09:09:03:19:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:19:INFO] Sniff delimiter as ',' [2020-10-09:09:03:19:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:19:INFO] Sniff delimiter as ',' [2020-10-09:09:03:19:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:22:INFO] Sniff delimiter as ',' [2020-10-09:09:03:22:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:22:INFO] Sniff delimiter as ',' [2020-10-09:09:03:22:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:22:INFO] Sniff delimiter as ',' [2020-10-09:09:03:22:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:22:INFO] Sniff delimiter as ',' [2020-10-09:09:03:22:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:22:INFO] Sniff delimiter as ',' [2020-10-09:09:03:22:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:22:INFO] Sniff delimiter as ',' [2020-10-09:09:03:22:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:22:INFO] Sniff delimiter as ',' [2020-10-09:09:03:22:INFO] Determined delimiter of CSV 
input is ',' [2020-10-09:09:03:22:INFO] Sniff delimiter as ',' [2020-10-09:09:03:22:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:24:INFO] Sniff delimiter as ',' [2020-10-09:09:03:24:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:24:INFO] Sniff delimiter as ',' [2020-10-09:09:03:24:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:24:INFO] Sniff delimiter as ',' [2020-10-09:09:03:24:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:25:INFO] Sniff delimiter as ',' [2020-10-09:09:03:25:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:24:INFO] Sniff delimiter as ',' [2020-10-09:09:03:24:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:24:INFO] Sniff delimiter as ',' [2020-10-09:09:03:24:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:24:INFO] Sniff delimiter as ',' [2020-10-09:09:03:24:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:25:INFO] Sniff delimiter as ',' [2020-10-09:09:03:25:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:26:INFO] Sniff delimiter as ',' [2020-10-09:09:03:26:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:26:INFO] Sniff delimiter as ',' [2020-10-09:09:03:26:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:27:INFO] Sniff delimiter as ',' [2020-10-09:09:03:27:INFO] Sniff delimiter as ',' [2020-10-09:09:03:27:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:27:INFO] Sniff delimiter as ',' [2020-10-09:09:03:27:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:27:INFO] Sniff delimiter as ',' [2020-10-09:09:03:27:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:27:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:27:INFO] Sniff delimiter as ',' [2020-10-09:09:03:27:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:27:INFO] Sniff delimiter as ',' [2020-10-09:09:03:27:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:29:INFO] Sniff delimiter as ',' [2020-10-09:09:03:29:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:29:INFO] Sniff delimiter as ',' [2020-10-09:09:03:29:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:29:INFO] Sniff delimiter as ',' [2020-10-09:09:03:29:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:29:INFO] Sniff delimiter as ',' [2020-10-09:09:03:29:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:29:INFO] Sniff delimiter as ',' [2020-10-09:09:03:29:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:29:INFO] Sniff delimiter as ',' [2020-10-09:09:03:29:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:29:INFO] Sniff delimiter as ',' [2020-10-09:09:03:29:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:29:INFO] Sniff delimiter as ',' [2020-10-09:09:03:29:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:34:INFO] Sniff delimiter as ',' [2020-10-09:09:03:34:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:34:INFO] Sniff delimiter as ',' [2020-10-09:09:03:34:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:34:INFO] Sniff delimiter as ',' [2020-10-09:09:03:34:INFO] Sniff delimiter as ',' [2020-10-09:09:03:34:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:34:INFO] Sniff delimiter as ',' [2020-10-09:09:03:34:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:34:INFO] Sniff delimiter as ',' 
[2020-10-09:09:03:34:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:34:INFO] Sniff delimiter as ',' [2020-10-09:09:03:34:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:34:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:34:INFO] Sniff delimiter as ',' [2020-10-09:09:03:34:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:36:INFO] Sniff delimiter as ',' [2020-10-09:09:03:36:INFO] Sniff delimiter as ',' [2020-10-09:09:03:36:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:36:INFO] Sniff delimiter as ',' [2020-10-09:09:03:36:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:36:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:36:INFO] Sniff delimiter as ',' [2020-10-09:09:03:36:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:37:INFO] Sniff delimiter as ',' [2020-10-09:09:03:37:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:37:INFO] Sniff delimiter as ',' [2020-10-09:09:03:37:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:37:INFO] Sniff delimiter as ',' [2020-10-09:09:03:37:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:37:INFO] Sniff delimiter as ',' [2020-10-09:09:03:37:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:39:INFO] Sniff delimiter as ',' [2020-10-09:09:03:39:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:39:INFO] Sniff delimiter as ',' [2020-10-09:09:03:39:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:39:INFO] Sniff delimiter as ',' [2020-10-09:09:03:39:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:39:INFO] Sniff delimiter as ',' [2020-10-09:09:03:39:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:39:INFO] Sniff delimiter as ',' [2020-10-09:09:03:39:INFO] Sniff delimiter as ',' [2020-10-09:09:03:39:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:39:INFO] Sniff delimiter as ',' [2020-10-09:09:03:39:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:39:INFO] Sniff delimiter as ',' [2020-10-09:09:03:39:INFO] Determined delimiter of CSV input is ',' [2020-10-09:09:03:39:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/367.5 KiB (3.4 MiB/s) with 1 file(s) remaining Completed 367.5 KiB/367.5 KiB (4.6 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-330335126841/xgboost-2020-10-09-08-58-59-418/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. 
In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. Using already existing model: xgboost-2020-10-09-08-45-16-753 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) #new_XV being the document-term-matrix ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. 
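If generators are unfamiliar, the following toy sketch (unrelated to the review data and not part of the original notebook) shows the behaviour `get_sample` relies on: the loop body only runs far enough to `yield` one value each time `next` is called, so we never have to score every review up front.
###Code
# Toy generator (illustration only): lazily yields odd numbers from a sequence.
def odd_numbers(values):
    for v in values:
        if v % 2 == 1:
            yield v  # execution pauses here until next() is called again

odds = odd_numbers(range(10))  # no work happens yet
print(next(odds))  # 1 -- the loop runs just far enough to find the first odd value
print(next(odds))  # 3 -- execution resumes where it left off
###Output _____no_output_____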
###Code print(next(gn)) ###Output (['talk', 'blast', 'open', 'trampa', 'infern', 'coolest', 'open', 'credit', 'ever', 'guid', 'music', 'tone', 'perhap', 'slightli', 'inspir', 'legendari', 'friday', '13th', 'theme', 'tsh', 'tsh', 'tsh', 'ha', 'ha', 'ha', 'name', 'lead', 'player', 'appear', 'screen', 'split', 'giant', 'syllabl', 'promis', 'intro', 'total', 'obscur', 'mexican', 'slasher', 'backwood', 'surviv', 'thriller', 'becom', 'cooler', 'everi', 'minut', 'pass', 'two', 'extrem', 'competit', 'testosteron', 'overload', 'paintbal', 'enemi', 'challeng', 'ultim', 'showdown', 'sleazi', 'bar', 'accord', 'newspap', 'articl', 'savag', 'bear', 'loos', 'nearbi', 'wood', 'alreadi', 'kill', 'multipl', 'hunter', 'tri', 'catch', 'challeng', 'includ', 'whoever', 'kill', 'bear', 'declar', 'ultim', 'macho', 'hero', 'biggest', 'set', 'ball', 'upon', 'arriv', 'howev', 'quickli', 'becom', 'obviou', 'bear', 'bewild', 'utterli', 'maniac', 'war', 'veteran', 'quit', 'arsen', 'weapon', 'hideout', 'numer', 'combat', 'trick', 'sleev', 'whole', 'decad', 'tame', 'deriv', 'american', 'slasher', 'earli', '90', 'mexican', 'effort', 'look', 'feel', 'refresh', 'vivid', 'formula', 'simplist', 'effici', 'lead', 'charact', 'plausibl', 'enough', 'build', 'toward', 'confront', 'sadist', 'killer', 'reason', 'suspens', 'maniac', 'must', 'fan', 'freddi', 'krueger', 'michael', 'myer', 'also', 'use', 'self', 'made', 'glove', 'sharp', 'knive', 'attach', 'white', 'mask', 'cover', 'face', 'murder', 'pleasingli', 'nasti', 'barbar', 'realli', 'hope', 'sinc', 'awesom', 'aforement', 'open', 'sequenc', 'wast', 'whole', 'lot', 'gratuit', 'blood', 'forestri', 'set', 'particularli', 'camouflag', 'boobi', 'trap', 'joyous', 'spectacular', 'trampa', 'intern', 'mexican', 'slasher', 'surviv', 'sleeper', 'hit', 'come', 'warmli', 'recommend', 'fan', 'genr'], 1) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:507: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None' warnings.warn("The parameter 'token_pattern' will not be used" ###Markdown Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'21st', 'reincarn', 'weari', 'spill', 'ghetto', 'playboy', 'victorian'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. 
###Code print(new_vocabulary - original_vocabulary) ###Output {'optimist', 'banana', 'dubiou', 'masterson', 'sophi', 'omin', 'orchestr'} ###Markdown These words themselves don't tell us much, however if one of these words occured with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here. Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send an bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. 
###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir,'new_data.csv'),key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir,'new_validation.csv'),key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir,'new_train.csv'),key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location,content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location,content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train':s3_new_input_train,'validation':s3_new_input_validation}) ###Output 2020-10-09 09:51:08 Starting - Starting the training job... 2020-10-09 09:51:10 Starting - Launching requested ML instances... 2020-10-09 09:52:07 Starting - Preparing the instances for training...... 2020-10-09 09:53:01 Downloading - Downloading input data... 2020-10-09 09:53:32 Training - Downloading the training image..Arguments: train [2020-10-09:09:53:52:INFO] Running standalone xgboost training. [2020-10-09:09:53:52:INFO] File size need to be processed in the node: 238.47mb. 
Available memory size in the node: 8477.11mb [2020-10-09:09:53:52:INFO] Determined delimiter of CSV input is ',' [09:53:52] S3DistributionType set as FullyReplicated [09:53:54] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-10-09:09:53:54:INFO] Determined delimiter of CSV input is ',' [09:53:54] S3DistributionType set as FullyReplicated [09:53:55] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, 2020-10-09 09:53:51 Training - Training image download completed. Training in progress.[09:53:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [0]#011train-error:0.302133#011validation-error:0.3054 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [09:53:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 10 pruned nodes, max_depth=5 [1]#011train-error:0.297#011validation-error:0.2953 [09:54:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [2]#011train-error:0.287867#011validation-error:0.2899 [09:54:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 14 pruned nodes, max_depth=5 [3]#011train-error:0.285667#011validation-error:0.283 [09:54:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [4]#011train-error:0.2726#011validation-error:0.2723 [09:54:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [5]#011train-error:0.269133#011validation-error:0.2693 [09:54:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 2 pruned nodes, max_depth=5 [6]#011train-error:0.2602#011validation-error:0.2603 [09:54:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [7]#011train-error:0.253133#011validation-error:0.2537 [09:54:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 16 pruned nodes, max_depth=5 [8]#011train-error:0.253133#011validation-error:0.2525 [09:54:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [9]#011train-error:0.244667#011validation-error:0.2474 [09:54:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [10]#011train-error:0.240267#011validation-error:0.2437 [09:54:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [11]#011train-error:0.233467#011validation-error:0.2377 [09:54:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [12]#011train-error:0.230467#011validation-error:0.2353 [09:54:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [13]#011train-error:0.228133#011validation-error:0.2338 [09:54:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [14]#011train-error:0.225#011validation-error:0.2311 [09:54:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [15]#011train-error:0.2206#011validation-error:0.2267 [09:54:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned 
nodes, max_depth=5 [16]#011train-error:0.219667#011validation-error:0.2245 [09:54:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [17]#011train-error:0.211867#011validation-error:0.2194 [09:54:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 2 pruned nodes, max_depth=5 [18]#011train-error:0.206133#011validation-error:0.2146 [09:54:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [19]#011train-error:0.202933#011validation-error:0.211 [09:54:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [20]#011train-error:0.200133#011validation-error:0.2103 [09:54:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [21]#011train-error:0.196733#011validation-error:0.2057 [09:54:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [22]#011train-error:0.1934#011validation-error:0.2062 [09:54:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [23]#011train-error:0.190667#011validation-error:0.2036 [09:54:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5 [24]#011train-error:0.190533#011validation-error:0.2026 [09:54:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [25]#011train-error:0.190067#011validation-error:0.2019 [09:54:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [26]#011train-error:0.187467#011validation-error:0.1987 [09:54:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [27]#011train-error:0.184733#011validation-error:0.1971 [09:54:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [28]#011train-error:0.1834#011validation-error:0.1952 [09:54:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [29]#011train-error:0.181867#011validation-error:0.1927 [09:54:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [30]#011train-error:0.178933#011validation-error:0.1919 [09:54:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [31]#011train-error:0.1764#011validation-error:0.1904 [09:54:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [32]#011train-error:0.1748#011validation-error:0.1894 [09:54:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [33]#011train-error:0.1726#011validation-error:0.1897 [09:54:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [34]#011train-error:0.172467#011validation-error:0.1896 [09:54:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [35]#011train-error:0.171133#011validation-error:0.188 [09:54:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [36]#011train-error:0.1704#011validation-error:0.1882 [09:54:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 
[37]#011train-error:0.167733#011validation-error:0.1882 [09:54:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [38]#011train-error:0.1664#011validation-error:0.1865 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count=1,instance_type='ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. Using already existing model: xgboost-2020-10-09-09-51-08-571 ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(data=new_data_location,content_type='text/csv',split_type='Line') new_xgb_transformer.wait() ###Output ......................2020-10-09T10:06:26.763:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2020-10-09 10:06:26 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-10-09 10:06:26 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-10-09 10:06:26 +0000] [1] [INFO] Using worker: gevent [2020-10-09 10:06:26 +0000] [37] [INFO] Booting worker with pid: 37 [2020-10-09 10:06:26 +0000] [38] [INFO] Booting worker with pid: 38 [2020-10-09:10:06:26:INFO] Model loaded successfully for worker : 37 [2020-10-09 10:06:26 +0000] [39] [INFO] Booting worker with pid: 39 [2020-10-09 10:06:26 +0000] [40] [INFO] Booting worker with pid: 40 [2020-10-09:10:06:26:INFO] Model loaded successfully for worker : 38 [2020-10-09:10:06:26:INFO] Model loaded successfully for worker : 39 [2020-10-09:10:06:26:INFO] Model loaded successfully for worker : 40 [2020-10-09:10:06:27:INFO] Sniff delimiter as ',' [2020-10-09:10:06:27:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:27:INFO] Sniff delimiter as ',' Arguments: serve [2020-10-09 10:06:26 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-10-09 10:06:26 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-10-09 10:06:26 +0000] [1] [INFO] Using worker: gevent [2020-10-09 10:06:26 +0000] [37] [INFO] Booting worker with pid: 37 [2020-10-09 10:06:26 +0000] [38] [INFO] Booting worker with pid: 38 [2020-10-09:10:06:26:INFO] Model loaded successfully for worker : 37 [2020-10-09 10:06:26 +0000] [39] [INFO] Booting worker with pid: 39 [2020-10-09 10:06:26 +0000] [40] [INFO] Booting worker with pid: 40 [2020-10-09:10:06:26:INFO] Model loaded successfully for worker : 38 [2020-10-09:10:06:26:INFO] Model loaded successfully for worker : 39 
[2020-10-09:10:06:26:INFO] Model loaded successfully for worker : 40 [2020-10-09:10:06:27:INFO] Sniff delimiter as ',' [2020-10-09:10:06:27:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:27:INFO] Sniff delimiter as ',' [2020-10-09:10:06:27:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:27:INFO] Sniff delimiter as ',' [2020-10-09:10:06:27:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:27:INFO] Sniff delimiter as ',' [2020-10-09:10:06:27:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:27:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:27:INFO] Sniff delimiter as ',' [2020-10-09:10:06:27:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:27:INFO] Sniff delimiter as ',' [2020-10-09:10:06:27:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:29:INFO] Sniff delimiter as ',' [2020-10-09:10:06:29:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:29:INFO] Sniff delimiter as ',' [2020-10-09:10:06:29:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:29:INFO] Sniff delimiter as ',' [2020-10-09:10:06:29:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:29:INFO] Sniff delimiter as ',' [2020-10-09:10:06:29:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:29:INFO] Sniff delimiter as ',' [2020-10-09:10:06:29:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:29:INFO] Sniff delimiter as ',' [2020-10-09:10:06:29:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:29:INFO] Sniff delimiter as ',' [2020-10-09:10:06:29:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:29:INFO] Sniff delimiter as ',' [2020-10-09:10:06:29:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:32:INFO] Sniff delimiter as ',' [2020-10-09:10:06:32:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:32:INFO] Sniff delimiter as ',' [2020-10-09:10:06:32:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:32:INFO] Sniff delimiter as ',' [2020-10-09:10:06:32:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:32:INFO] Sniff delimiter as ',' [2020-10-09:10:06:32:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:32:INFO] Sniff delimiter as ',' [2020-10-09:10:06:32:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:32:INFO] Sniff delimiter as ',' [2020-10-09:10:06:32:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:32:INFO] Sniff delimiter as ',' [2020-10-09:10:06:32:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:32:INFO] Sniff delimiter as ',' [2020-10-09:10:06:32:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:34:INFO] Sniff delimiter as ',' [2020-10-09:10:06:34:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:34:INFO] Sniff delimiter as ',' [2020-10-09:10:06:34:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:34:INFO] Sniff delimiter as ',' [2020-10-09:10:06:34:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:34:INFO] Sniff delimiter as ',' [2020-10-09:10:06:34:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:34:INFO] Sniff delimiter as ',' [2020-10-09:10:06:34:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:34:INFO] Sniff delimiter as ',' [2020-10-09:10:06:34:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:36:INFO] Sniff delimiter as ',' [2020-10-09:10:06:36:INFO] Determined delimiter of CSV input is 
',' [2020-10-09:10:06:37:INFO] Sniff delimiter as ',' [2020-10-09:10:06:37:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:37:INFO] Sniff delimiter as ',' [2020-10-09:10:06:37:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:37:INFO] Sniff delimiter as ',' [2020-10-09:10:06:37:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:36:INFO] Sniff delimiter as ',' [2020-10-09:10:06:36:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:37:INFO] Sniff delimiter as ',' [2020-10-09:10:06:37:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:37:INFO] Sniff delimiter as ',' [2020-10-09:10:06:37:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:37:INFO] Sniff delimiter as ',' [2020-10-09:10:06:37:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:39:INFO] Sniff delimiter as ',' [2020-10-09:10:06:39:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:39:INFO] Sniff delimiter as ',' [2020-10-09:10:06:39:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:39:INFO] Sniff delimiter as ',' [2020-10-09:10:06:39:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:39:INFO] Sniff delimiter as ',' [2020-10-09:10:06:39:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:39:INFO] Sniff delimiter as ',' [2020-10-09:10:06:39:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:39:INFO] Sniff delimiter as ',' [2020-10-09:10:06:39:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:39:INFO] Sniff delimiter as ',' [2020-10-09:10:06:39:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:39:INFO] Sniff delimiter as ',' [2020-10-09:10:06:39:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:41:INFO] Sniff delimiter as ',' [2020-10-09:10:06:41:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:41:INFO] Sniff delimiter as ',' [2020-10-09:10:06:41:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:42:INFO] Sniff delimiter as ',' [2020-10-09:10:06:42:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:42:INFO] Sniff delimiter as ',' [2020-10-09:10:06:42:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:41:INFO] Sniff delimiter as ',' [2020-10-09:10:06:41:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:41:INFO] Sniff delimiter as ',' [2020-10-09:10:06:41:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:42:INFO] Sniff delimiter as ',' [2020-10-09:10:06:42:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:42:INFO] Sniff delimiter as ',' [2020-10-09:10:06:42:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:44:INFO] Sniff delimiter as ',' [2020-10-09:10:06:44:INFO] Determined delimiter of CSV input is ',' [2020-10-09:10:06:44:INFO] Sniff delimiter as ',' [2020-10-09:10:06:44:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/366.3 KiB (4.3 MiB/s) with 1 file(s) remaining Completed 366.3 KiB/366.3 KiB (6.1 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-330335126841/xgboost-2020-10-09-10-01-38-565/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. 
###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. 
Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint,EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output ------------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. 
Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code import os data_dir = '../data' cache_dir = '../cache' # First we will remove all of the files contained in the data_dir directory !rm -r $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm -r $cache_dir/* !rmdir $cache_dir ###Output rm: cannot remove ‘../data/*’: No such file or directory rmdir: failed to remove ‘../data’: No such file or directory rmdir: failed to remove ‘../cache’: Directory not empty ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. 
###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output _____no_output_____ ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
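Before running the feature-extraction cell below, it may help to see the bag-of-words encoding on something tiny. The following is an illustrative sketch only (a made-up toy corpus, not the IMDb data, and not part of the original run); it uses the same trick as the next cell of passing identity functions for the preprocessor and tokenizer, because the documents are already lists of words.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical, already-tokenized "reviews" invented for this example
toy_reviews = [["great", "movi", "great", "cast"],
               ["bad", "plot", "bad", "act"]]

# Identity preprocessor/tokenizer: the documents are already lists of words
toy_vec = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
toy_counts = toy_vec.fit_transform(toy_reviews).toarray()

print(toy_vec.vocabulary_)  # maps each word to a column index
print(toy_counts)           # one row per review, one column per vocabulary word
```

The cell below applies the same idea to the full training and test reviews, capping the vocabulary at 5,000 words and caching the result to disk.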
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.

- Model Artifacts
- Training Code (Container)
- Inference Code (Container)

The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.

The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.

The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role

# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()

# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')

# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
                                    role,      # What is our current IAM Role
                                    train_instance_count=1, # How many compute instances
                                    train_instance_type='ml.m4.xlarge', # What kind of compute instances
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                    sagemaker_session=session)

# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
                        eta=0.2,
                        gamma=4,
                        min_child_weight=6,
                        subsample=0.8,
                        silent=0,
                        objective='binary:logistic',
                        early_stopping_rounds=10,
                        num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost model

Now that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')

xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
INFO:sagemaker:Creating training-job with name: xgboost-2019-04-09-19-59-07-030
###Markdown
Testing the model

Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. 
###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output INFO:sagemaker:Creating model with name: xgboost-2019-04-06-14-15-36-072 ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output INFO:sagemaker:Creating transform job with name: xgboost-2019-04-06-14-22-50-709 ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output ............................................! ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/370.1 KiB (3.2 MiB/s) with 1 file(s) remaining Completed 370.1 KiB/370.1 KiB (4.4 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-west-1-288115630859/xgboost-2019-04-06-14-22-50-709/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. 
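As a quick, optional sanity check (an illustrative sketch that was not part of the original run, and which assumes `new_Y` is a list of 0/1 labels like the original ones), we might peek at how much new data we received and how the labels are split before testing anything:

```python
# How many new reviews were collected, and what fraction are labelled positive?
print("number of new reviews:", len(new_X))
print("fraction labelled positive:", sum(new_Y) / len(new_Y))

# A glimpse at the first few processed words of one review
print(new_X[0][:10])
```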
(TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code print(', '.join(new_X[0])) # TODO: Create the CountVectorizer using the previously constructed vocabulary # need to make sure this does not apply any preprocessing or tokenizing vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x:x, tokenizer=lambda x:x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.fit_transform(new_X).toarray() type(new_XV) # if we don't use the toarray() funtion, it would simply be a sparese array ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, "new_data.csv"), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer.transform(new_data_location, content_type="text/csv", split_type='Line') ###Output INFO:sagemaker:Creating transform job with name: xgboost-2019-04-06-14-44-41-996 ###Markdown As usual, we copy the results of the batch transform job to our local instance. 
###Code xgb_transformer.wait() !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-eu-west-1-288115630859/xgboost-2019-04-06-14-22-50-709/new_data.csv.out to ../data/sentiment_update/new_data.csv.out download: s3://sagemaker-eu-west-1-288115630859/xgboost-2019-04-06-14-22-50-709/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(1, "ml.m4.xlarge") ###Output INFO:sagemaker:Creating model with name: xgboost-2019-04-09-20-08-11-306 INFO:sagemaker:Creating endpoint with name xgboost-2019-04-09-19-59-07-030 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. 
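If Python generators are new to you, here is a tiny self-contained sketch (toy values only, nothing from the data set) of the lazy behaviour the next cell relies on: the body of a generator only runs when `next()` is called, so we never pay for samples we don't look at.

```python
def squares(numbers):
    for n in numbers:
        yield n * n  # execution pauses here until the caller asks for the next value

gen = squares(range(1_000_000))  # nothing has been computed yet
print(next(gen))  # 0 -- computed on demand
print(next(gen))  # 1 -- only the requested values are ever produced
```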
###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['knew', 'could', 'funni', 'christoph', 'meloni', 'janel', 'moloney', 'known', 'outstand', 'work', 'televis', 'hottest', 'drama', 'law', 'order', 'svu', 'west', 'wing', 'put', 'togeth', 'big', 'screen', 'get', 'engag', 'romant', 'comedi', 'plenti', 'laugh', 'actor', 'develop', 'stori', 'ongo', 'relationship', 'impress', 'skill', 'leav', 'audienc', 'bound', 'fall', 'love', 'barri', 'singer', 'meloni', 'despit', 'fact', 'standup', 'comic', 'also', 'happen', 'mean', 'spirit', 'sexist', 'jerk', 'root', 'even', 'take', 'insecur', 'opposit', 'sex', 'chase', 'thea', 'moloney', 'halfway', 'around', 'countri', 'hope', 'win', 'heart', 'littl', 'common', 'barri', 'final', 'open', 'heart', 'wonder', 'thea', 'keep', 'run', 'away', 'souler', 'opposit', 'wonder', 'movi', 'incred', 'cast', 'gift', 'writer', 'well', 'worth', 'time', 'banana'], 0) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'reincarn', 'playboy', 'ghetto', 'victorian', '21st', 'weari', 'spill'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'orchestr', 'dubiou', 'sophi', 'masterson', 'banana', 'optimist', 'omin'} ###Markdown These words themselves don't tell us much, however if one of these words occured with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here. Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. 
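One possible way to investigate, sketched here as an optional exploration (it assumes `new_X` still holds the pre-processed word lists, which is the case at this point in the notebook), is to count how often the words that moved into or out of the top-5000 vocabulary actually occur in the new reviews:

```python
from collections import Counter

# Total occurrences of every word across all of the new (already tokenized) reviews
new_word_counts = Counter(word for review in new_X for word in review)

# Words that entered or left the vocabulary, sorted by how often they occur in the new data
changed_words = (new_vocabulary - original_vocabulary) | (original_vocabulary - new_vocabulary)
for word in sorted(changed_words, key=lambda w: -new_word_counts[w]):
    print(word, new_word_counts[w])
```

A word near the top of that list with a surprisingly large count would be a good candidate for whatever has changed in the incoming reviews.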
Probably, the way people talk has changed, i.e. the words they use to describe these movies. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send an bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. 
###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, "new_data.csv"), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, "new_validation.csv"), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, "new_train.csv"), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, role, train_instance_count=1, train_instance_type='ml.m4.xlarge', output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({"train": s3_new_input_train, "validation": s3_new_input_validation}) ###Output INFO:sagemaker:Creating training-job with name: xgboost-2019-04-09-20-19-31-373 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count=1, instance_type='ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output _____no_output_____ ###Markdown Copy the results to our local instance.
###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. 
The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "new-sentiment-model-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName=new_xgb_endpoint_config_name, ProductionVariants=[{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large? Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose.
Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = None s3_new_input_validation = None # TODO: Using the new validation and training data, 'fit' your new model. ###Output _____no_output_____ ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = None ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. ###Output _____no_output_____ ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. 
This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output _____no_output_____ ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = None ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = None # TODO: Using the SageMaker Client, construct the endpoint configuration. 
new_xgb_endpoint_config_info = None ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. 
This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-05-01 17:31:31-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 22.8MB/s in 4.6s 2020-05-01 17:31:36 (17.6 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
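To make the encoding concrete before looking at the full implementation, here is a tiny, self-contained illustration (not part of the pipeline; the two "reviews" below are made up) of what a Bag-of-Words vector looks like for already-tokenized text. ###Code
# A minimal sketch of the Bag-of-Words idea on two made-up, already-tokenized "reviews".
# It uses the same trick as the real code below: dummy preprocessor/tokenizer functions
# so that CountVectorizer works directly on lists of words.
from sklearn.feature_extraction.text import CountVectorizer

toy_reviews = [['great', 'movi', 'great', 'cast'],
               ['bad', 'movi', 'bad', 'plot']]

toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
toy_features = toy_vectorizer.fit_transform(toy_reviews).toarray()

print(toy_vectorizer.vocabulary_)  # maps each distinct word to a column index
print(toy_features)                # each row counts how often each vocabulary word occurs in that review
###Output _____no_output_____ ###Markdown The real `extract_BoW_features` function below does the same thing at scale, with the vocabulary capped at the 5000 most frequent training-set words and with caching so the expensive step only runs once.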
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2020-05-01 17:37:18 Starting - Starting the training job... 2020-05-01 17:37:21 Starting - Launching requested ML instances... 2020-05-01 17:38:16 Starting - Preparing the instances for training...... 2020-05-01 17:39:17 Downloading - Downloading input data... 2020-05-01 17:39:35 Training - Downloading the training image..Arguments: train [2020-05-01:17:39:56:INFO] Running standalone xgboost training. [2020-05-01:17:39:56:INFO] File size need to be processed in the node: 238.47mb.
Available memory size in the node: 8484.28mb [2020-05-01:17:39:56:INFO] Determined delimiter of CSV input is ',' [17:39:56] S3DistributionType set as FullyReplicated [17:39:58] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-05-01:17:39:58:INFO] Determined delimiter of CSV input is ',' [17:39:58] S3DistributionType set as FullyReplicated [17:39:59] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [17:40:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [0]#011train-error:0.2958#011validation-error:0.3088 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [17:40:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 4 pruned nodes, max_depth=5 [1]#011train-error:0.276933#011validation-error:0.2886 [17:40:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [2]#011train-error:0.275267#011validation-error:0.2895 2020-05-01 17:39:55 Training - Training image download completed. Training in progress.[17:40:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 2 pruned nodes, max_depth=5 [3]#011train-error:0.2708#011validation-error:0.2832 [17:40:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [4]#011train-error:0.265#011validation-error:0.2791 [17:40:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.241133#011validation-error:0.2595 [17:40:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [6]#011train-error:0.237867#011validation-error:0.256 [17:40:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [7]#011train-error:0.236267#011validation-error:0.2532 [17:40:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [8]#011train-error:0.234133#011validation-error:0.2549 [17:40:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [9]#011train-error:0.2258#011validation-error:0.2494 [17:40:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [10]#011train-error:0.220067#011validation-error:0.2436 [17:40:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [11]#011train-error:0.2132#011validation-error:0.2372 [17:40:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [12]#011train-error:0.209067#011validation-error:0.2331 [17:40:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [13]#011train-error:0.2016#011validation-error:0.2302 [17:40:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [14]#011train-error:0.199733#011validation-error:0.2248 [17:40:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [15]#011train-error:0.196267#011validation-error:0.2225 [17:40:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, 
max_depth=5 [16]#011train-error:0.191133#011validation-error:0.2202 [17:40:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [17]#011train-error:0.1886#011validation-error:0.2175 [17:40:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [18]#011train-error:0.1866#011validation-error:0.2151 [17:40:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5 [19]#011train-error:0.1858#011validation-error:0.213 [17:40:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [20]#011train-error:0.184467#011validation-error:0.2118 [17:40:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [21]#011train-error:0.1808#011validation-error:0.2106 [17:40:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5 [22]#011train-error:0.1792#011validation-error:0.209 [17:40:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [23]#011train-error:0.1764#011validation-error:0.2067 [17:40:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [24]#011train-error:0.1728#011validation-error:0.2057 [17:40:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [25]#011train-error:0.171333#011validation-error:0.203 [17:40:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [26]#011train-error:0.169067#011validation-error:0.2008 [17:40:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [27]#011train-error:0.169#011validation-error:0.2003 [17:40:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [28]#011train-error:0.165667#011validation-error:0.1978 [17:40:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [29]#011train-error:0.164067#011validation-error:0.197 [17:40:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [30]#011train-error:0.163333#011validation-error:0.1959 [17:40:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [31]#011train-error:0.162#011validation-error:0.195 [17:40:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [32]#011train-error:0.1604#011validation-error:0.1941 [17:40:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [33]#011train-error:0.159#011validation-error:0.1926 [17:40:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [34]#011train-error:0.157467#011validation-error:0.191 [17:40:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [35]#011train-error:0.156#011validation-error:0.1906 [17:40:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [36]#011train-error:0.155667#011validation-error:0.1908 [17:40:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 
[37]#011train-error:0.153267#011validation-error:0.1897 [17:40:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 [38]#011train-error:0.152133#011validation-error:0.188 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMakers Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can peform inference on a large number of samples. An example of this in industry might be peforming an end of month report. This method of inference can also be useful to us as it means to can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer objects from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. 
###Code xgb_transformer.wait() ###Output ......................Arguments: serve [2020-05-01 17:49:08 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-05-01 17:49:08 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-05-01 17:49:08 +0000] [1] [INFO] Using worker: gevent [2020-05-01 17:49:08 +0000] [38] [INFO] Booting worker with pid: 38 [2020-05-01 17:49:08 +0000] [39] [INFO] Booting worker with pid: 39 [2020-05-01 17:49:08 +0000] [40] [INFO] Booting worker with pid: 40 [2020-05-01:17:49:08:INFO] Model loaded successfully for worker : 38 [2020-05-01:17:49:08:INFO] Model loaded successfully for worker : 39 [2020-05-01 17:49:08 +0000] [41] [INFO] Booting worker with pid: 41 [2020-05-01:17:49:08:INFO] Model loaded successfully for worker : 40 [2020-05-01:17:49:08:INFO] Model loaded successfully for worker : 41 [2020-05-01:17:49:30:INFO] Sniff delimiter as ',' [2020-05-01:17:49:30:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:30:INFO] Sniff delimiter as ',' [2020-05-01:17:49:30:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:30:INFO] Sniff delimiter as ',' [2020-05-01:17:49:30:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:30:INFO] Sniff delimiter as ',' [2020-05-01:17:49:30:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:31:INFO] Sniff delimiter as ',' [2020-05-01:17:49:31:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:31:INFO] Sniff delimiter as ',' [2020-05-01:17:49:31:INFO] Determined delimiter of CSV input is ',' 2020-05-01T17:49:27.700:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-05-01:17:49:32:INFO] Sniff delimiter as ',' [2020-05-01:17:49:32:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:32:INFO] Sniff delimiter as ',' [2020-05-01:17:49:32:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:33:INFO] Sniff delimiter as ',' [2020-05-01:17:49:33:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:33:INFO] Sniff delimiter as ',' [2020-05-01:17:49:33:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:33:INFO] Sniff delimiter as ',' [2020-05-01:17:49:33:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:33:INFO] Sniff delimiter as ',' [2020-05-01:17:49:33:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:33:INFO] Sniff delimiter as ',' [2020-05-01:17:49:33:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:33:INFO] Sniff delimiter as ',' [2020-05-01:17:49:33:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:35:INFO] Sniff delimiter as ',' [2020-05-01:17:49:35:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:35:INFO] Sniff delimiter as ',' [2020-05-01:17:49:35:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:35:INFO] Sniff delimiter as ',' [2020-05-01:17:49:35:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:35:INFO] Sniff delimiter as ',' [2020-05-01:17:49:35:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:36:INFO] Sniff delimiter as ',' [2020-05-01:17:49:36:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:36:INFO] Sniff delimiter as ',' [2020-05-01:17:49:36:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:36:INFO] Sniff delimiter as ',' [2020-05-01:17:49:36:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:36:INFO] Sniff delimiter as ',' [2020-05-01:17:49:36:INFO] Determined delimiter of CSV input is ',' 
[2020-05-01:17:49:36:INFO] Sniff delimiter as ',' [2020-05-01:17:49:36:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:36:INFO] Sniff delimiter as ',' [2020-05-01:17:49:36:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:36:INFO] Sniff delimiter as ',' [2020-05-01:17:49:36:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:36:INFO] Sniff delimiter as ',' [2020-05-01:17:49:36:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:38:INFO] Sniff delimiter as ',' [2020-05-01:17:49:38:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:39:INFO] Sniff delimiter as ',' [2020-05-01:17:49:39:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:38:INFO] Sniff delimiter as ',' [2020-05-01:17:49:38:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:39:INFO] Sniff delimiter as ',' [2020-05-01:17:49:39:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:39:INFO] Sniff delimiter as ',' [2020-05-01:17:49:39:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:39:INFO] Sniff delimiter as ',' [2020-05-01:17:49:39:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:39:INFO] Sniff delimiter as ',' [2020-05-01:17:49:39:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:39:INFO] Sniff delimiter as ',' [2020-05-01:17:49:39:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:41:INFO] Sniff delimiter as ',' [2020-05-01:17:49:41:INFO] Sniff delimiter as ',' [2020-05-01:17:49:41:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:41:INFO] Sniff delimiter as ',' [2020-05-01:17:49:41:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:41:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:41:INFO] Sniff delimiter as ',' [2020-05-01:17:49:41:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:43:INFO] Sniff delimiter as ',' [2020-05-01:17:49:43:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:44:INFO] Sniff delimiter as ',' [2020-05-01:17:49:43:INFO] Sniff delimiter as ',' [2020-05-01:17:49:43:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:44:INFO] Sniff delimiter as ',' [2020-05-01:17:49:44:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:44:INFO] Sniff delimiter as ',' [2020-05-01:17:49:44:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:44:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:44:INFO] Sniff delimiter as ',' [2020-05-01:17:49:44:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:44:INFO] Sniff delimiter as ',' [2020-05-01:17:49:44:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:44:INFO] Sniff delimiter as ',' [2020-05-01:17:49:44:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:46:INFO] Sniff delimiter as ',' [2020-05-01:17:49:46:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:46:INFO] Sniff delimiter as ',' [2020-05-01:17:49:46:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:46:INFO] Sniff delimiter as ',' [2020-05-01:17:49:46:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:46:INFO] Sniff delimiter as ',' [2020-05-01:17:49:46:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:46:INFO] Sniff delimiter as ',' [2020-05-01:17:49:46:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:46:INFO] Sniff delimiter as ',' [2020-05-01:17:49:46:INFO] Determined delimiter of CSV input is ',' 
[2020-05-01:17:49:46:INFO] Sniff delimiter as ',' [2020-05-01:17:49:46:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:46:INFO] Sniff delimiter as ',' [2020-05-01:17:49:46:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:48:INFO] Sniff delimiter as ',' [2020-05-01:17:49:48:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:48:INFO] Sniff delimiter as ',' [2020-05-01:17:49:48:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:48:INFO] Sniff delimiter as ',' [2020-05-01:17:49:48:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:48:INFO] Sniff delimiter as ',' [2020-05-01:17:49:48:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:48:INFO] Sniff delimiter as ',' [2020-05-01:17:49:48:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:48:INFO] Sniff delimiter as ',' [2020-05-01:17:49:48:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:49:INFO] Sniff delimiter as ',' [2020-05-01:17:49:49:INFO] Determined delimiter of CSV input is ',' [2020-05-01:17:49:49:INFO] Sniff delimiter as ',' [2020-05-01:17:49:49:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/370.3 KiB (4.2 MiB/s) with 1 file(s) remaining Completed 370.3 KiB/370.3 KiB (6.0 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-180564272071/xgboost-2020-05-01-17-45-33-947/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. 
(TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary
vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # reviews are already preprocessed and tokenized
# TODO: Transform our new data set and store the transformed data in the variable new_XV
new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished.
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
xgb_transformer.wait() ###Output _____no_output_____ ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. 
In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
# (using the same ml.m4.xlarge instance type as elsewhere in this notebook)
xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call `next` on our generator. ###Code print(next(gn)) ###Output _____no_output_____ ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. 
###Code print(original_vocabulary - new_vocabulary) ###Output _____no_output_____ ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output _____no_output_____ ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. 
Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model.
new_xgb = sagemaker.estimator.Estimator(container, role, train_instance_count=1, train_instance_type='ml.m4.xlarge', output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session)
# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model.
new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data.
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
# TODO: Using the new validation and training data, 'fit' your new model.
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output _____no_output_____ ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish.
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait() ###Output _____no_output_____ ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown And see how well the model did. 
###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output _____no_output_____ ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. 
###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime
# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
# TODO: Using the SageMaker Client, construct the endpoint configuration.
# Note: the instance type and variant name below are reasonable choices rather than required values.
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
                                    EndpointConfigName = new_xgb_endpoint_config_name,
                                    ProductionVariants = [{
                                        "InstanceType": "ml.m4.xlarge",
                                        "InitialVariantWeight": 1,
                                        "InitialInstanceCount": 1,
                                        "ModelName": new_xgb_transformer.model_name,
                                        "VariantName": "XGB-Model"
                                    }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large? Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. 
###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-04-14 11:48:47-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 
200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 9.68MB/s in 12s 2020-04-14 11:48:59 (6.81 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
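To make that point concrete, here is a small, self-contained sketch (using a couple of toy sentences rather than the IMDb reviews, and plain `CountVectorizer` defaults) of fitting the vectorizer on training documents only and then reusing the learned vocabulary, unchanged, on test documents: ###Code from sklearn.feature_extraction.text import CountVectorizer

# The vocabulary is learned from the training documents only.
train_docs = ["the movie was great fun", "a terrible waste of time"]
test_docs = ["what a great film", "words like popcorn that were never seen are simply dropped"]

cv = CountVectorizer()
train_bow = cv.fit_transform(train_docs).toarray()  # fit on the training text, then transform it
test_bow = cv.transform(test_docs).toarray()        # reuse the same vocabulary on the test text

print(train_bow.shape, test_bow.shape)  # both have one column per training-vocabulary word
print(sorted(cv.vocabulary_.keys())) ###Output _____no_output_____ ###Markdown The next cell applies the same idea to the preprocessed reviews, adding a fixed vocabulary size and a cache so the features only need to be computed once.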
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2020-04-14 11:55:27 Starting - Starting the training job... 2020-04-14 11:55:28 Starting - Launching requested ML instances... 2020-04-14 11:56:27 Starting - Preparing the instances for training...... 2020-04-14 11:57:19 Downloading - Downloading input data... 2020-04-14 11:57:57 Training - Training image download completed. Training in progress..Arguments: train [2020-04-14:11:57:58:INFO] Running standalone xgboost training. [2020-04-14:11:57:58:INFO] File size need to be processed in the node: 238.47mb. 
Available memory size in the node: 8508.56mb [2020-04-14:11:57:58:INFO] Determined delimiter of CSV input is ',' [11:57:58] S3DistributionType set as FullyReplicated [11:58:00] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-04-14:11:58:00:INFO] Determined delimiter of CSV input is ',' [11:58:00] S3DistributionType set as FullyReplicated [11:58:01] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [11:58:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [0]#011train-error:0.2972#011validation-error:0.302 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [11:58:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [1]#011train-error:0.283333#011validation-error:0.2874 [11:58:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [2]#011train-error:0.2776#011validation-error:0.2852 [11:58:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [3]#011train-error:0.261#011validation-error:0.2698 [11:58:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [4]#011train-error:0.258867#011validation-error:0.2687 [11:58:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [5]#011train-error:0.251267#011validation-error:0.2608 [11:58:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [6]#011train-error:0.245733#011validation-error:0.2551 [11:58:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [7]#011train-error:0.238867#011validation-error:0.2513 [11:58:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 0 pruned nodes, max_depth=5 [8]#011train-error:0.23#011validation-error:0.2451 [11:58:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [9]#011train-error:0.226333#011validation-error:0.2385 [11:58:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 2 pruned nodes, max_depth=5 [10]#011train-error:0.2184#011validation-error:0.2327 [11:58:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [11]#011train-error:0.216333#011validation-error:0.2309 [11:58:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [12]#011train-error:0.2106#011validation-error:0.2257 [11:58:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [13]#011train-error:0.210067#011validation-error:0.2264 [11:58:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [14]#011train-error:0.2014#011validation-error:0.222 [11:58:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [15]#011train-error:0.197933#011validation-error:0.2191 [11:58:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [16]#011train-error:0.1982#011validation-error:0.2161 [11:58:26] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 2 pruned nodes, max_depth=5 [17]#011train-error:0.193867#011validation-error:0.2144 [11:58:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [18]#011train-error:0.189933#011validation-error:0.2125 [11:58:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [19]#011train-error:0.189133#011validation-error:0.2108 [11:58:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [20]#011train-error:0.186333#011validation-error:0.208 [11:58:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [21]#011train-error:0.182867#011validation-error:0.2051 [11:58:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [22]#011train-error:0.178867#011validation-error:0.2044 [11:58:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5 [23]#011train-error:0.1776#011validation-error:0.2034 [11:58:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5 [24]#011train-error:0.174267#011validation-error:0.2008 [11:58:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [25]#011train-error:0.171733#011validation-error:0.1985 [11:58:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [26]#011train-error:0.170467#011validation-error:0.1949 [11:58:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [27]#011train-error:0.168333#011validation-error:0.1928 [11:58:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 18 pruned nodes, max_depth=5 [28]#011train-error:0.1678#011validation-error:0.193 [11:58:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [29]#011train-error:0.1648#011validation-error:0.1908 [11:58:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [30]#011train-error:0.163533#011validation-error:0.1897 [11:58:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [31]#011train-error:0.162333#011validation-error:0.1881 [11:58:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [32]#011train-error:0.160933#011validation-error:0.1874 [11:58:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5 [33]#011train-error:0.161067#011validation-error:0.1863 [11:58:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [34]#011train-error:0.159133#011validation-error:0.1842 [11:58:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [35]#011train-error:0.1578#011validation-error:0.1846 [11:58:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [36]#011train-error:0.154867#011validation-error:0.1833 [11:58:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [37]#011train-error:0.1548#011validation-error:0.1833 [11:58:53] src/tree/updater_prune.cc:74: 
tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [38]#011train-error:0.153533#011validation-error:0.1818 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. 
###Code xgb_transformer.wait() ###Output .....................Arguments: serve [2020-04-14 12:04:25 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-04-14 12:04:25 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-04-14 12:04:25 +0000] [1] [INFO] Using worker: gevent [2020-04-14 12:04:25 +0000] [38] [INFO] Booting worker with pid: 38 [2020-04-14 12:04:26 +0000] [39] [INFO] Booting worker with pid: 39 [2020-04-14 12:04:26 +0000] [40] [INFO] Booting worker with pid: 40 [2020-04-14:12:04:26:INFO] Model loaded successfully for worker : 38 [2020-04-14:12:04:26:INFO] Model loaded successfully for worker : 39 [2020-04-14 12:04:26 +0000] [41] [INFO] Booting worker with pid: 41 [2020-04-14:12:04:26:INFO] Model loaded successfully for worker : 40 [2020-04-14:12:04:26:INFO] Model loaded successfully for worker : 41 [2020-04-14:12:04:44:INFO] Sniff delimiter as ',' [2020-04-14:12:04:44:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:44:INFO] Sniff delimiter as ',' [2020-04-14:12:04:44:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:45:INFO] Sniff delimiter as ',' [2020-04-14:12:04:45:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:45:INFO] Sniff delimiter as ',' [2020-04-14:12:04:45:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:45:INFO] Sniff delimiter as ',' [2020-04-14:12:04:45:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:45:INFO] Sniff delimiter as ',' [2020-04-14:12:04:45:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:45:INFO] Sniff delimiter as ',' [2020-04-14:12:04:45:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:45:INFO] Sniff delimiter as ',' [2020-04-14:12:04:45:INFO] Determined delimiter of CSV input is ',' 2020-04-14T12:04:42.717:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-04-14:12:04:47:INFO] Sniff delimiter as ',' [2020-04-14:12:04:47:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:47:INFO] Sniff delimiter as ',' [2020-04-14:12:04:47:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:48:INFO] Sniff delimiter as ',' [2020-04-14:12:04:48:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:47:INFO] Sniff delimiter as ',' [2020-04-14:12:04:47:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:47:INFO] Sniff delimiter as ',' [2020-04-14:12:04:47:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:48:INFO] Sniff delimiter as ',' [2020-04-14:12:04:48:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:48:INFO] Sniff delimiter as ',' [2020-04-14:12:04:48:INFO] Sniff delimiter as ',' [2020-04-14:12:04:48:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:48:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:50:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:50:INFO] Sniff delimiter as ',' [2020-04-14:12:04:50:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:50:INFO] Sniff delimiter as ',' [2020-04-14:12:04:50:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:50:INFO] Sniff delimiter as ',' [2020-04-14:12:04:50:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:50:INFO] Sniff delimiter as ',' [2020-04-14:12:04:50:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:50:INFO] Sniff delimiter as ',' [2020-04-14:12:04:50:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:50:INFO] Sniff delimiter as ',' 
[2020-04-14:12:04:50:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:50:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:52:INFO] Sniff delimiter as ',' [2020-04-14:12:04:52:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:52:INFO] Sniff delimiter as ',' [2020-04-14:12:04:52:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:52:INFO] Sniff delimiter as ',' [2020-04-14:12:04:52:INFO] Sniff delimiter as ',' [2020-04-14:12:04:52:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:52:INFO] Sniff delimiter as ',' [2020-04-14:12:04:52:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:52:INFO] Sniff delimiter as ',' [2020-04-14:12:04:52:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:53:INFO] Sniff delimiter as ',' [2020-04-14:12:04:53:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:52:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:53:INFO] Sniff delimiter as ',' [2020-04-14:12:04:53:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:55:INFO] Sniff delimiter as ',' [2020-04-14:12:04:55:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:55:INFO] Sniff delimiter as ',' [2020-04-14:12:04:55:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:55:INFO] Sniff delimiter as ',' [2020-04-14:12:04:55:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:55:INFO] Sniff delimiter as ',' [2020-04-14:12:04:55:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:55:INFO] Sniff delimiter as ',' [2020-04-14:12:04:55:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:55:INFO] Sniff delimiter as ',' [2020-04-14:12:04:55:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:55:INFO] Sniff delimiter as ',' [2020-04-14:12:04:55:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:55:INFO] Sniff delimiter as ',' [2020-04-14:12:04:55:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:57:INFO] Sniff delimiter as ',' [2020-04-14:12:04:57:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:57:INFO] Sniff delimiter as ',' [2020-04-14:12:04:57:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:57:INFO] Sniff delimiter as ',' [2020-04-14:12:04:57:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:57:INFO] Sniff delimiter as ',' [2020-04-14:12:04:57:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:57:INFO] Sniff delimiter as ',' [2020-04-14:12:04:57:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:57:INFO] Sniff delimiter as ',' [2020-04-14:12:04:57:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:57:INFO] Sniff delimiter as ',' [2020-04-14:12:04:57:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:57:INFO] Sniff delimiter as ',' [2020-04-14:12:04:57:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:59:INFO] Sniff delimiter as ',' [2020-04-14:12:04:59:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:04:59:INFO] Sniff delimiter as ',' [2020-04-14:12:04:59:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:00:INFO] Sniff delimiter as ',' [2020-04-14:12:05:00:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:00:INFO] Sniff delimiter as ',' [2020-04-14:12:05:00:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:00:INFO] Sniff delimiter as ',' [2020-04-14:12:05:00:INFO] Determined delimiter of CSV input is 
',' [2020-04-14:12:05:00:INFO] Sniff delimiter as ',' [2020-04-14:12:05:00:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:02:INFO] Sniff delimiter as ',' [2020-04-14:12:05:02:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:02:INFO] Sniff delimiter as ',' [2020-04-14:12:05:02:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:02:INFO] Sniff delimiter as ',' [2020-04-14:12:05:02:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:02:INFO] Sniff delimiter as ',' [2020-04-14:12:05:02:INFO] Sniff delimiter as ',' [2020-04-14:12:05:02:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:02:INFO] Sniff delimiter as ',' [2020-04-14:12:05:02:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:02:INFO] Sniff delimiter as ',' [2020-04-14:12:05:02:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:02:INFO] Sniff delimiter as ',' [2020-04-14:12:05:02:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:02:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:04:INFO] Sniff delimiter as ',' [2020-04-14:12:05:04:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:05:INFO] Sniff delimiter as ',' [2020-04-14:12:05:05:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:04:INFO] Sniff delimiter as ',' [2020-04-14:12:05:04:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:05:INFO] Sniff delimiter as ',' [2020-04-14:12:05:05:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:05:INFO] Sniff delimiter as ',' [2020-04-14:12:05:05:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:05:INFO] Sniff delimiter as ',' [2020-04-14:12:05:05:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:05:INFO] Sniff delimiter as ',' [2020-04-14:12:05:05:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:05:INFO] Sniff delimiter as ',' [2020-04-14:12:05:05:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:07:INFO] Sniff delimiter as ',' [2020-04-14:12:05:07:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:07:INFO] Sniff delimiter as ',' [2020-04-14:12:05:07:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:07:INFO] Sniff delimiter as ',' [2020-04-14:12:05:07:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:07:INFO] Sniff delimiter as ',' [2020-04-14:12:05:07:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:07:INFO] Sniff delimiter as ',' [2020-04-14:12:05:07:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:07:INFO] Sniff delimiter as ',' [2020-04-14:12:05:07:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:07:INFO] Sniff delimiter as ',' [2020-04-14:12:05:07:INFO] Determined delimiter of CSV input is ',' [2020-04-14:12:05:07:INFO] Sniff delimiter as ',' [2020-04-14:12:05:07:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. 
###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/369.2 KiB (2.9 MiB/s) with 1 file(s) remaining Completed 369.2 KiB/369.2 KiB (4.0 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-west-1-515519275483/xgboost-2020-04-14-12-01-11-395/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = None vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = None new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. 
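###Markdown A slightly stricter version of the check below — just a sketch, assuming `new_XV` and the original `vocabulary` dictionary are still in memory — would assert that every encoded review has one column per vocabulary entry, rather than inspecting a single row: ###Code
# Every row of the bag-of-words matrix should be as wide as the original vocabulary (5000 here).
assert all(len(row) == len(vocabulary) for row in new_XV), "unexpected encoding width"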
###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = None new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ........................Arguments: serve Arguments: serve [2020-04-14 13:09:24 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-04-14 13:09:24 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-04-14 13:09:24 +0000] [1] [INFO] Using worker: gevent [2020-04-14 13:09:24 +0000] [38] [INFO] Booting worker with pid: 38 [2020-04-14 13:09:25 +0000] [39] [INFO] Booting worker with pid: 39 [2020-04-14 13:09:25 +0000] [40] [INFO] Booting worker with pid: 40 [2020-04-14:13:09:25:INFO] Model loaded successfully for worker : 38 [2020-04-14 13:09:25 +0000] [41] [INFO] Booting worker with pid: 41 [2020-04-14:13:09:25:INFO] Model loaded successfully for worker : 39 [2020-04-14:13:09:25:INFO] Model loaded successfully for worker : 40 [2020-04-14:13:09:25:INFO] Model loaded successfully for worker : 41 [2020-04-14 13:09:24 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-04-14 13:09:24 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-04-14 13:09:24 +0000] [1] [INFO] Using worker: gevent [2020-04-14 13:09:24 +0000] [38] [INFO] Booting worker with pid: 38 [2020-04-14 13:09:25 +0000] [39] [INFO] Booting worker with pid: 39 [2020-04-14 13:09:25 +0000] [40] [INFO] Booting worker with pid: 40 [2020-04-14:13:09:25:INFO] Model loaded successfully for worker : 38 [2020-04-14 13:09:25 +0000] [41] [INFO] Booting worker with pid: 41 [2020-04-14:13:09:25:INFO] Model loaded successfully for worker : 39 [2020-04-14:13:09:25:INFO] Model loaded successfully for worker : 40 [2020-04-14:13:09:25:INFO] Model loaded successfully for worker : 41 2020-04-14T13:09:29.397:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-04-14:13:09:32:INFO] Sniff delimiter as ',' [2020-04-14:13:09:32:INFO] Determined delimiter of CSV input is ',' [2020-04-14:13:09:32:INFO] Sniff delimiter as ',' [2020-04-14:13:09:32:INFO] Determined 
delimiter of CSV input is ',' [... repeated 'Sniff delimiter' / 'Determined delimiter of CSV input' log lines truncated ...] [2020-04-14:13:09:46:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/369.5 KiB (2.7 MiB/s) with 1 file(s) remaining Completed 369.5 KiB/369.5 KiB (3.8 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-west-1-515519275483/xgboost-2020-04-14-13-05-38-018/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. 
Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = None xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') ###Output WARNING:sagemaker:Using already existing model: xgboost-2020-04-14-11-55-27-602 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['rememb', 'movi', '13', 'seem', 'lot', 'review', 'say', 'thing', 'age', '13', 'group', 'school', 'buddi', 'want', 'see', 'billi', 'crystal', 'first', 'movi', 'fell', 'typic', 'commerci', 'ad', 'tell', 'us', 'great', 'comedi', 'suffer', '45', 'minut', 'agre', 'leav', 'theater', 'grotesqu', 'tasteless', 'far', 'cri', 'abil', 'billi', 'crystal', 'make', 'us', 'laugh', 'laugh', 'stumbl', 'upon', 'review', 'accid', 'decid', 'regist', 'tell', 'rest', 'world', 'rot', 'gut', 'wast', 'film', 'rent', 'deserv', 'get', 'warn', 'banana'], 1) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. 
###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'victorian', 'ghetto', 'reincarn', 'spill', 'playboy', '21st', 'weari'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'banana', 'masterson', 'orchestr', 'sophi', 'omin', 'optimist', 'dubiou'} ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question:** What exactly is going on here? Not only which (if any) words appear with a larger-than-expected frequency, but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data (a complementary whole-data-set frequency count is sketched a little further below). ###Code import numpy as np wordlist = new_vocabulary - original_vocabulary wordfreq = [0] * len(wordlist) print(wordfreq) for i in range(100): wordfreq = np.add(wordfreq, [list(next(gn))[0].count(w) for w in wordlist]) # count each new-vocabulary word in the next misclassified review print(wordfreq) ###Output [0, 0, 0, 0, 0, 0, 0] [46 0 0 0 0 1 0] ###Markdown (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag-of-words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary that we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. Before doing the split, the short sketch below takes one more look at the word-frequency question from above. 
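###Markdown This is only a supplementary sketch (not something from the original notebook); it assumes the cells above have been run in order, so that `new_XV` now holds the reviews encoded with `new_vectorizer` and the two vocabulary sets are still defined. It totals, over the whole new data set, the counts of the words that appear only in the new vocabulary, which is one way to see whether any of them shows up with a suspiciously large frequency. ###Code
# Column totals of the re-encoded bag-of-words matrix give per-word counts over the whole data set.
new_only_words = new_vocabulary - original_vocabulary
column_totals = new_XV.sum(axis=0)
new_word_counts = {w: int(column_totals[new_vectorizer.vocabulary_[w]]) for w in new_only_words}

# Show the new-vocabulary-only words from most to least frequent.
print(sorted(new_word_counts.items(), key=lambda item: item[1], reverse=True))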
###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. 
###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = None s3_new_input_validation = None s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2020-04-14 14:00:16 Starting - Starting the training job... 2020-04-14 14:00:17 Starting - Launching requested ML instances...... 2020-04-14 14:01:17 Starting - Preparing the instances for training... 2020-04-14 14:02:13 Downloading - Downloading input data... 2020-04-14 14:02:46 Training - Downloading the training image... 2020-04-14 14:03:06 Training - Training image download completed. Training in progress.Arguments: train [2020-04-14:14:03:07:INFO] Running standalone xgboost training. [2020-04-14:14:03:07:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8502.03mb [2020-04-14:14:03:07:INFO] Determined delimiter of CSV input is ',' [14:03:07] S3DistributionType set as FullyReplicated [14:03:08] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-04-14:14:03:08:INFO] Determined delimiter of CSV input is ',' [14:03:08] S3DistributionType set as FullyReplicated [14:03:09] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [14:03:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 2 pruned nodes, max_depth=5 [0]#011train-error:0.2992#011validation-error:0.3025 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. 
[14:03:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [1]#011train-error:0.296533#011validation-error:0.3001 [14:03:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 6 pruned nodes, max_depth=5 [2]#011train-error:0.280467#011validation-error:0.2889 [14:03:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [3]#011train-error:0.275333#011validation-error:0.2799 [14:03:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [4]#011train-error:0.271267#011validation-error:0.2785 [14:03:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 12 pruned nodes, max_depth=5 [5]#011train-error:0.2612#011validation-error:0.2686 [14:03:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [6]#011train-error:0.255067#011validation-error:0.2637 [14:03:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [7]#011train-error:0.252733#011validation-error:0.2636 [14:03:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [8]#011train-error:0.246267#011validation-error:0.2557 [14:03:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 2 pruned nodes, max_depth=5 [9]#011train-error:0.2404#011validation-error:0.2499 [14:03:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [10]#011train-error:0.236533#011validation-error:0.2464 [14:03:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [11]#011train-error:0.232067#011validation-error:0.2439 [14:03:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 2 pruned nodes, max_depth=5 [12]#011train-error:0.2308#011validation-error:0.2439 [14:03:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [13]#011train-error:0.226067#011validation-error:0.2399 [14:03:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [14]#011train-error:0.222067#011validation-error:0.2394 [14:03:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5 [15]#011train-error:0.2162#011validation-error:0.235 [14:03:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 8 pruned nodes, max_depth=5 [16]#011train-error:0.211667#011validation-error:0.2326 [14:03:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [17]#011train-error:0.2102#011validation-error:0.2328 [14:03:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [18]#011train-error:0.2062#011validation-error:0.2303 [14:03:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [19]#011train-error:0.2038#011validation-error:0.2292 [14:03:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [20]#011train-error:0.202133#011validation-error:0.2256 [14:03:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [21]#011train-error:0.199#011validation-error:0.2241 [14:03:41] src/tree/updater_prune.cc:74: tree 
pruning end, 1 roots, 36 extra nodes, 2 pruned nodes, max_depth=5 [22]#011train-error:0.1944#011validation-error:0.2194 [14:03:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [23]#011train-error:0.192667#011validation-error:0.218 [14:03:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [24]#011train-error:0.1908#011validation-error:0.217 [14:03:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [25]#011train-error:0.189#011validation-error:0.2142 [14:03:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [26]#011train-error:0.186733#011validation-error:0.2132 [14:03:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 2 pruned nodes, max_depth=5 [27]#011train-error:0.184867#011validation-error:0.2108 [14:03:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [28]#011train-error:0.183267#011validation-error:0.2068 [14:03:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5 [29]#011train-error:0.1788#011validation-error:0.2044 [14:03:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [30]#011train-error:0.177133#011validation-error:0.2022 [14:03:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [31]#011train-error:0.175#011validation-error:0.2017 [14:03:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [32]#011train-error:0.174133#011validation-error:0.2 [14:03:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [33]#011train-error:0.173467#011validation-error:0.2008 [14:03:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [34]#011train-error:0.174133#011validation-error:0.2 [14:03:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [35]#011train-error:0.1716#011validation-error:0.1986 [14:03:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [36]#011train-error:0.169667#011validation-error:0.1971 [14:04:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [37]#011train-error:0.1664#011validation-error:0.1973 [14:04:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5 [38]#011train-error:0.165467#011validation-error:0.1969 [14:04:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [39]#011train-error:0.163133#011validation-error:0.1966 [14:04:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [40]#011train-error:0.162733#011validation-error:0.1948 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. 
We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = None new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ......................Arguments: serve [2020-04-14 14:09:17 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-04-14 14:09:17 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-04-14 14:09:17 +0000] [1] [INFO] Using worker: gevent [2020-04-14 14:09:17 +0000] [38] [INFO] Booting worker with pid: 38 [2020-04-14 14:09:17 +0000] [39] [INFO] Booting worker with pid: 39 [2020-04-14 14:09:17 +0000] [40] [INFO] Booting worker with pid: 40 [2020-04-14 14:09:18 +0000] [41] [INFO] Booting worker with pid: 41 [2020-04-14:14:09:18:INFO] Model loaded successfully for worker : 38 [2020-04-14:14:09:18:INFO] Model loaded successfully for worker : 39 [2020-04-14:14:09:18:INFO] Model loaded successfully for worker : 41 [2020-04-14:14:09:18:INFO] Model loaded successfully for worker : 40 2020-04-14T14:09:37.716:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-04-14:14:09:40:INFO] Sniff delimiter as ',' [2020-04-14:14:09:40:INFO] Determined delimiter of CSV input is ',' [2020-04-14:14:09:40:INFO] Sniff delimiter as ',' [2020-04-14:14:09:40:INFO] Determined delimiter of CSV input is ',' [2020-04-14:14:09:41:INFO] Sniff delimiter as ',' [2020-04-14:14:09:41:INFO] Determined delimiter of CSV input is ',' [2020-04-14:14:09:41:INFO] Sniff delimiter as ',' [2020-04-14:14:09:41:INFO] Determined delimiter of CSV input is ',' [2020-04-14:14:09:41:INFO] Sniff delimiter as ',' [2020-04-14:14:09:41:INFO] Determined delimiter of CSV input is ',' [2020-04-14:14:09:41:INFO] Sniff delimiter as ',' [2020-04-14:14:09:41:INFO] Determined delimiter of CSV input is ',' [2020-04-14:14:09:41:INFO] Sniff delimiter as ',' [2020-04-14:14:09:41:INFO] Determined delimiter of CSV input is ',' [2020-04-14:14:09:41:INFO] Sniff delimiter as ',' [2020-04-14:14:09:41:INFO] Determined delimiter of CSV input is ',' [2020-04-14:14:09:43:INFO] Sniff delimiter as ',' [2020-04-14:14:09:43:INFO] Determined delimiter of CSV input is ',' [2020-04-14:14:09:43:INFO] Sniff delimiter as ',' [2020-04-14:14:09:43:INFO] Determined delimiter of CSV input is ',' [2020-04-14:14:09:43:INFO] Sniff delimiter as ',' [2020-04-14:14:09:43:INFO] Determined delimiter of CSV input is ',' [2020-04-14:14:09:44:INFO] Sniff delimiter as ',' [2020-04-14:14:09:44:INFO] Determined delimiter of CSV input is ',' [2020-04-14:14:09:43:INFO] Sniff delimiter as ',' [2020-04-14:14:09:43:INFO] Determined delimiter of CSV input is ',' [2020-04-14:14:09:43:INFO] Sniff delimiter as ',' 
[2020-04-14:14:09:43:INFO] Determined delimiter of CSV input is ',' [... repeated 'Sniff delimiter' / 'Determined delimiter of CSV input' log lines truncated ...] [2020-04-14:14:10:03:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. 
###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/366.4 KiB (3.5 MiB/s) with 1 file(s) remaining Completed 366.4 KiB/366.4 KiB (4.9 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-west-1-515519275483/xgboost-2020-04-14-14-06-00-523/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. #test_X = None test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. 
As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "new-xgb-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "New-XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output -------------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. 
Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). 
It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output _____no_output_____ ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
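To make that point concrete, here is a small illustrative sketch (it is not part of the pipeline below, and the toy reviews are made up) showing that a `CountVectorizer` fit only on training documents simply ignores words it has never seen when asked to transform new text:

```python
from sklearn.feature_extraction.text import CountVectorizer

# Toy, already-tokenized "reviews"; the vectorizer is fit on the training pair only.
toy_train = [["great", "movie", "great", "cast"], ["boring", "movie"]]
toy_test = [["awful", "boring", "cast"]]  # 'awful' never appears in the training documents

toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
print(toy_vectorizer.fit_transform(toy_train).toarray())  # word counts over the training vocabulary
print(toy_vectorizer.vocabulary_)                         # word -> column index mapping
print(toy_vectorizer.transform(toy_test).toarray())       # 'awful' is silently dropped
```

The `extract_BoW_features` function below does the same thing at scale: it fits the vectorizer on the training reviews only and then reuses it, unchanged, on the test reviews.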
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2020-09-30 19:09:47 Starting - Starting the training job... 2020-09-30 19:09:49 Starting - Launching requested ML instances...... 2020-09-30 19:10:55 Starting - Preparing the instances for training...... 2020-09-30 19:12:05 Downloading - Downloading input data 2020-09-30 19:12:05 Training - Downloading the training image..Arguments: train [2020-09-30:19:12:28:INFO] Running standalone xgboost training. [2020-09-30:19:12:28:INFO] File size need to be processed in the node: 238.47mb. 
Available memory size in the node: 8491.56mb [2020-09-30:19:12:28:INFO] Determined delimiter of CSV input is ',' [19:12:28] S3DistributionType set as FullyReplicated [19:12:29] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-09-30:19:12:29:INFO] Determined delimiter of CSV input is ',' [19:12:29] S3DistributionType set as FullyReplicated [19:12:31] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, 2020-09-30 19:12:27 Training - Training image download completed. Training in progress.[19:12:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [0]#011train-error:0.301#011validation-error:0.2935 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [19:12:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5 [1]#011train-error:0.288#011validation-error:0.2829 [19:12:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 6 pruned nodes, max_depth=5 [2]#011train-error:0.2832#011validation-error:0.2791 [19:12:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [3]#011train-error:0.266067#011validation-error:0.2649 [19:12:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 2 pruned nodes, max_depth=5 [4]#011train-error:0.2602#011validation-error:0.2616 [19:12:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.258#011validation-error:0.2609 [19:12:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [6]#011train-error:0.257067#011validation-error:0.2589 [19:12:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [7]#011train-error:0.236267#011validation-error:0.2397 [19:12:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 0 pruned nodes, max_depth=5 [8]#011train-error:0.232733#011validation-error:0.239 [19:12:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [9]#011train-error:0.2244#011validation-error:0.2309 [19:12:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 2 pruned nodes, max_depth=5 [10]#011train-error:0.222867#011validation-error:0.2304 [19:12:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [11]#011train-error:0.2198#011validation-error:0.2266 [19:12:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 12 pruned nodes, max_depth=5 [12]#011train-error:0.211667#011validation-error:0.221 [19:12:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [13]#011train-error:0.210333#011validation-error:0.2184 [19:12:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [14]#011train-error:0.206933#011validation-error:0.2135 [19:12:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [15]#011train-error:0.2022#011validation-error:0.2123 [19:12:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, 
max_depth=5 [16]#011train-error:0.198733#011validation-error:0.2077 [19:12:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 2 pruned nodes, max_depth=5 [17]#011train-error:0.194667#011validation-error:0.2067 [19:12:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [18]#011train-error:0.194533#011validation-error:0.2065 [19:12:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [19]#011train-error:0.1908#011validation-error:0.2029 [19:13:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 14 pruned nodes, max_depth=5 [20]#011train-error:0.187867#011validation-error:0.2001 [19:13:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [21]#011train-error:0.185933#011validation-error:0.1977 [19:13:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [22]#011train-error:0.183067#011validation-error:0.1964 [19:13:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [23]#011train-error:0.180867#011validation-error:0.1972 [19:13:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [24]#011train-error:0.178733#011validation-error:0.1925 [19:13:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 18 pruned nodes, max_depth=5 [25]#011train-error:0.1776#011validation-error:0.1907 [19:13:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [26]#011train-error:0.1756#011validation-error:0.1899 [19:13:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 18 pruned nodes, max_depth=5 [27]#011train-error:0.1732#011validation-error:0.189 [19:13:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [28]#011train-error:0.171867#011validation-error:0.188 [19:13:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 16 pruned nodes, max_depth=5 [29]#011train-error:0.1698#011validation-error:0.1869 [19:13:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 0 pruned nodes, max_depth=5 [30]#011train-error:0.168#011validation-error:0.1852 [19:13:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5 [31]#011train-error:0.1652#011validation-error:0.1835 [19:13:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [32]#011train-error:0.1632#011validation-error:0.1833 [19:13:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [33]#011train-error:0.1614#011validation-error:0.1818 [19:13:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [34]#011train-error:0.157667#011validation-error:0.1801 [19:13:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [35]#011train-error:0.157467#011validation-error:0.1802 [19:13:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [36]#011train-error:0.155133#011validation-error:0.1784 [19:13:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 
[37]#011train-error:0.1542#011validation-error:0.1778 [19:13:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [38]#011train-error:0.152333#011validation-error:0.1763 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we first need to create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. 
###Code xgb_transformer.wait() ###Output ............................Arguments: serve [2020-09-30 19:21:05 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-09-30 19:21:05 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-09-30 19:21:05 +0000] [1] [INFO] Using worker: gevent [2020-09-30 19:21:05 +0000] [36] [INFO] Booting worker with pid: 36 [2020-09-30 19:21:05 +0000] [37] [INFO] Booting worker with pid: 37 [2020-09-30:19:21:05:INFO] Model loaded successfully for worker : 36 [2020-09-30:19:21:05:INFO] Model loaded successfully for worker : 37 [2020-09-30 19:21:05 +0000] [38] [INFO] Booting worker with pid: 38 [2020-09-30 19:21:05 +0000] [39] [INFO] Booting worker with pid: 39 [2020-09-30:19:21:05:INFO] Model loaded successfully for worker : 38 [2020-09-30:19:21:05:INFO] Model loaded successfully for worker : 39 [2020-09-30:19:21:05:INFO] Sniff delimiter as ',' [2020-09-30:19:21:05:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:05:INFO] Sniff delimiter as ',' [2020-09-30:19:21:05:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:05:INFO] Sniff delimiter as ',' [2020-09-30:19:21:05:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:07:INFO] Sniff delimiter as ',' [2020-09-30:19:21:07:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:07:INFO] Sniff delimiter as ',' [2020-09-30:19:21:07:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:08:INFO] Sniff delimiter as ',' [2020-09-30:19:21:08:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:08:INFO] Sniff delimiter as ',' [2020-09-30:19:21:08:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:09:INFO] Sniff delimiter as ',' [2020-09-30:19:21:09:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:09:INFO] Sniff delimiter as ',' [2020-09-30:19:21:09:INFO] Determined delimiter of CSV input is ',' 2020-09-30T19:21:05.305:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-09-30:19:21:11:INFO] Sniff delimiter as ',' [2020-09-30:19:21:11:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:11:INFO] Sniff delimiter as ',' [2020-09-30:19:21:11:INFO] Sniff delimiter as ',' [2020-09-30:19:21:11:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:11:INFO] Sniff delimiter as ',' [2020-09-30:19:21:11:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:11:INFO] Sniff delimiter as ',' [2020-09-30:19:21:11:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:11:INFO] Sniff delimiter as ',' [2020-09-30:19:21:11:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:11:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:11:INFO] Sniff delimiter as ',' [2020-09-30:19:21:11:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:11:INFO] Sniff delimiter as ',' [2020-09-30:19:21:11:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:13:INFO] Sniff delimiter as ',' [2020-09-30:19:21:13:INFO] Sniff delimiter as ',' [2020-09-30:19:21:13:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:13:INFO] Sniff delimiter as ',' [2020-09-30:19:21:13:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:13:INFO] Sniff delimiter as ',' [2020-09-30:19:21:13:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:14:INFO] Sniff delimiter as ',' [2020-09-30:19:21:14:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:13:INFO] Determined delimiter of CSV input is ',' 
[2020-09-30:19:21:13:INFO] Sniff delimiter as ',' [2020-09-30:19:21:13:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:13:INFO] Sniff delimiter as ',' [2020-09-30:19:21:13:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:14:INFO] Sniff delimiter as ',' [2020-09-30:19:21:14:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:16:INFO] Sniff delimiter as ',' [2020-09-30:19:21:16:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:16:INFO] Sniff delimiter as ',' [2020-09-30:19:21:16:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:16:INFO] Sniff delimiter as ',' [2020-09-30:19:21:16:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:16:INFO] Sniff delimiter as ',' [2020-09-30:19:21:16:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:16:INFO] Sniff delimiter as ',' [2020-09-30:19:21:16:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:16:INFO] Sniff delimiter as ',' [2020-09-30:19:21:16:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:16:INFO] Sniff delimiter as ',' [2020-09-30:19:21:16:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:16:INFO] Sniff delimiter as ',' [2020-09-30:19:21:16:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:18:INFO] Sniff delimiter as ',' [2020-09-30:19:21:18:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:18:INFO] Sniff delimiter as ',' [2020-09-30:19:21:18:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:18:INFO] Sniff delimiter as ',' [2020-09-30:19:21:18:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:18:INFO] Sniff delimiter as ',' [2020-09-30:19:21:18:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:18:INFO] Sniff delimiter as ',' [2020-09-30:19:21:18:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:18:INFO] Sniff delimiter as ',' [2020-09-30:19:21:18:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:18:INFO] Sniff delimiter as ',' [2020-09-30:19:21:18:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:18:INFO] Sniff delimiter as ',' [2020-09-30:19:21:18:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:20:INFO] Sniff delimiter as ',' [2020-09-30:19:21:20:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:20:INFO] Sniff delimiter as ',' [2020-09-30:19:21:20:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:21:INFO] Sniff delimiter as ',' [2020-09-30:19:21:21:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:21:INFO] Sniff delimiter as ',' [2020-09-30:19:21:21:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:21:INFO] Sniff delimiter as ',' [2020-09-30:19:21:21:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:21:INFO] Sniff delimiter as ',' [2020-09-30:19:21:21:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:23:INFO] Sniff delimiter as ',' [2020-09-30:19:21:23:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:23:INFO] Sniff delimiter as ',' [2020-09-30:19:21:23:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:23:INFO] Sniff delimiter as ',' [2020-09-30:19:21:23:INFO] Sniff delimiter as ',' [2020-09-30:19:21:23:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:23:INFO] Sniff delimiter as ',' [2020-09-30:19:21:23:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:23:INFO] Sniff delimiter as ',' [2020-09-30:19:21:23:INFO] 
Determined delimiter of CSV input is ',' [2020-09-30:19:21:23:INFO] Sniff delimiter as ',' [2020-09-30:19:21:23:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:23:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:23:INFO] Sniff delimiter as ',' [2020-09-30:19:21:23:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:25:INFO] Sniff delimiter as ',' [2020-09-30:19:21:25:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:25:INFO] Sniff delimiter as ',' [2020-09-30:19:21:25:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:25:INFO] Sniff delimiter as ',' [2020-09-30:19:21:25:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:26:INFO] Sniff delimiter as ',' [2020-09-30:19:21:26:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:25:INFO] Sniff delimiter as ',' [2020-09-30:19:21:25:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:26:INFO] Sniff delimiter as ',' [2020-09-30:19:21:26:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:26:INFO] Sniff delimiter as ',' [2020-09-30:19:21:26:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:26:INFO] Sniff delimiter as ',' [2020-09-30:19:21:26:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:28:INFO] Sniff delimiter as ',' [2020-09-30:19:21:28:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:28:INFO] Sniff delimiter as ',' [2020-09-30:19:21:28:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:28:INFO] Sniff delimiter as ',' [2020-09-30:19:21:28:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:28:INFO] Sniff delimiter as ',' [2020-09-30:19:21:28:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:28:INFO] Sniff delimiter as ',' [2020-09-30:19:21:28:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:21:28:INFO] Sniff delimiter as ',' [2020-09-30:19:21:28:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/369.7 KiB (2.8 MiB/s) with 1 file(s) remaining Completed 369.7 KiB/369.7 KiB (4.0 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-428747017283/xgboost-2020-09-30-19-16-32-138/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. 
Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer( vocabulary= vocabulary, preprocessor= lambda x: x, tokenizer= lambda x:x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. 
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output .........................2020-09-30T19:35:35.522:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2020-09-30 19:35:35 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-09-30 19:35:35 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-09-30 19:35:35 +0000] [1] [INFO] Using worker: gevent [2020-09-30 19:35:35 +0000] [36] [INFO] Booting worker with pid: 36 [2020-09-30 19:35:35 +0000] [37] [INFO] Booting worker with pid: 37 [2020-09-30:19:35:35:INFO] Model loaded successfully for worker : 36 Arguments: serve [2020-09-30 19:35:35 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-09-30 19:35:35 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-09-30 19:35:35 +0000] [1] [INFO] Using worker: gevent [2020-09-30 19:35:35 +0000] [36] [INFO] Booting worker with pid: 36 [2020-09-30 19:35:35 +0000] [37] [INFO] Booting worker with pid: 37 [2020-09-30:19:35:35:INFO] Model loaded successfully for worker : 36 [2020-09-30 19:35:35 +0000] [38] [INFO] Booting worker with pid: 38 [2020-09-30:19:35:35:INFO] Model loaded successfully for worker : 37 [2020-09-30 19:35:35 +0000] [39] [INFO] Booting worker with pid: 39 [2020-09-30:19:35:35:INFO] Model loaded successfully for worker : 38 [2020-09-30:19:35:35:INFO] Model loaded successfully for worker : 39 [2020-09-30:19:35:35:INFO] Sniff delimiter as ',' [2020-09-30:19:35:35:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:35:INFO] Sniff delimiter as ',' [2020-09-30:19:35:35:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:36:INFO] Sniff delimiter as ',' [2020-09-30:19:35:36:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:36:INFO] Sniff delimiter as ',' [2020-09-30:19:35:36:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:38:INFO] Sniff delimiter as ',' [2020-09-30:19:35:38:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:38:INFO] Sniff delimiter as ',' [2020-09-30:19:35:38:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:38:INFO] Sniff delimiter as ',' [2020-09-30:19:35:38:INFO] Determined delimiter of CSV input is ',' [2020-09-30 19:35:35 +0000] [38] [INFO] Booting worker with pid: 38 [2020-09-30:19:35:35:INFO] Model loaded successfully for worker : 37 [2020-09-30 19:35:35 +0000] [39] [INFO] Booting worker with pid: 39 [2020-09-30:19:35:35:INFO] Model loaded successfully for worker : 38 [2020-09-30:19:35:35:INFO] Model loaded successfully for worker : 39 [2020-09-30:19:35:35:INFO] Sniff delimiter as ',' [2020-09-30:19:35:35:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:35:INFO] Sniff delimiter as ',' [2020-09-30:19:35:35:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:36:INFO] Sniff delimiter as ',' [2020-09-30:19:35:36:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:36:INFO] Sniff delimiter as ',' [2020-09-30:19:35:36:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:38:INFO] Sniff delimiter as ',' [2020-09-30:19:35:38:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:38:INFO] Sniff delimiter as ',' [2020-09-30:19:35:38:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:38:INFO] Sniff delimiter as ',' [2020-09-30:19:35:38:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:38:INFO] Sniff delimiter as ',' [2020-09-30:19:35:38:INFO] Determined delimiter of CSV 
input is ',' [2020-09-30:19:35:38:INFO] Sniff delimiter as ',' [2020-09-30:19:35:38:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:40:INFO] Sniff delimiter as ',' [2020-09-30:19:35:40:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:40:INFO] Sniff delimiter as ',' [2020-09-30:19:35:40:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:40:INFO] Sniff delimiter as ',' [2020-09-30:19:35:40:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:41:INFO] Sniff delimiter as ',' [2020-09-30:19:35:41:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:41:INFO] Sniff delimiter as ',' [2020-09-30:19:35:41:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:40:INFO] Sniff delimiter as ',' [2020-09-30:19:35:40:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:41:INFO] Sniff delimiter as ',' [2020-09-30:19:35:41:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:41:INFO] Sniff delimiter as ',' [2020-09-30:19:35:41:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:43:INFO] Sniff delimiter as ',' [2020-09-30:19:35:43:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:43:INFO] Sniff delimiter as ',' [2020-09-30:19:35:43:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:45:INFO] Sniff delimiter as ',' [2020-09-30:19:35:45:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:45:INFO] Sniff delimiter as ',' [2020-09-30:19:35:45:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:45:INFO] Sniff delimiter as ',' [2020-09-30:19:35:45:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:45:INFO] Sniff delimiter as ',' [2020-09-30:19:35:45:INFO] Sniff delimiter as ',' [2020-09-30:19:35:45:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:45:INFO] Sniff delimiter as ',' [2020-09-30:19:35:45:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:46:INFO] Sniff delimiter as ',' [2020-09-30:19:35:46:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:45:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:46:INFO] Sniff delimiter as ',' [2020-09-30:19:35:46:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:48:INFO] Sniff delimiter as ',' [2020-09-30:19:35:48:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:48:INFO] Sniff delimiter as ',' [2020-09-30:19:35:48:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:48:INFO] Sniff delimiter as ',' [2020-09-30:19:35:48:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:48:INFO] Sniff delimiter as ',' [2020-09-30:19:35:48:INFO] Sniff delimiter as ',' [2020-09-30:19:35:48:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:48:INFO] Sniff delimiter as ',' [2020-09-30:19:35:48:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:48:INFO] Sniff delimiter as ',' [2020-09-30:19:35:48:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:48:INFO] Sniff delimiter as ',' [2020-09-30:19:35:48:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:48:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:50:INFO] Sniff delimiter as ',' [2020-09-30:19:35:50:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:50:INFO] Sniff delimiter as ',' [2020-09-30:19:35:50:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:50:INFO] Sniff delimiter as ',' [2020-09-30:19:35:50:INFO] Sniff delimiter as ',' 
[2020-09-30:19:35:50:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:50:INFO] Sniff delimiter as ',' [2020-09-30:19:35:50:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:50:INFO] Sniff delimiter as ',' [2020-09-30:19:35:50:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:50:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:51:INFO] Sniff delimiter as ',' [2020-09-30:19:35:51:INFO] Determined delimiter of CSV input is ',' [2020-09-30:19:35:51:INFO] Sniff delimiter as ',' [2020-09-30:19:35:51:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/369.9 KiB (2.8 MiB/s) with 1 file(s) remaining Completed 369.9 KiB/369.9 KiB (3.9 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-428747017283/xgboost-2020-09-30-19-31-13-614/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. Using already existing model: xgboost-2020-09-30-19-09-47-382 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. 
xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['film', 'portrait', 'half', 'spastic', 'teenag', 'boy', 'benjamin', 'visit', 'board', 'school', 'lousi', 'mark', 'math', 'make', 'best', 'experi', 'life', 'got', 'seriou', 'self', 'esteem', 'issu', 'rough', 'start', 'new', 'school', 'start', 'make', 'friend', 'fall', 'love', 'girl', 'american', 'pieish', 'teenag', 'stuff', 'besid', 'comedi', 'element', 'film', 'told', 'seriou', 'way', 'focuss', 'benjamin', 'problem', 'alreadi', 'like', 'stori', 'outlin', 'save', 'time', 'watch', 'someth', 'els', 'pleas', 'awar', 'follow', '1', 'benjamin', 'total', 'loser', 'whatev', 'terribl', 'wrong', 'goe', 'self', 'piti', 'time', 'kind', 'charm', 'loser', 'feel', 'sympathi', 'laugh', 'instead', 'behavior', 'realli', 'annoy', 'teenag', 'year', 'far', 'behind', 'could', 'bare', 'stand', 'watch', '2', 'film', 'hardli', 'tri', 'realist', 'stori', 'seem', 'experi', 'charact', 'except', 'janosch', 'mayb', 'ye', 'know', 'film', 'base', 'auto', 'biographi', 'written', '17', 'year', 'old', 'experi', 'german', 'school', 'german', 'youth', 'believ', '3', 'show', 'sexual', 'awaken', 'realli', 'import', 'thing', 'film', 'subject', 'doubt', 'teenag', 'boy', 'ejacul', 'cooki', 'contest', 'everyon', 'hit', 'cooki', 'sperm', 'mass', 'masturb', 'wood', 'loser', 'eat', 'sperm', 'wet', 'cooki', 'afterward', 'although', 'kinda', 'amus', 'contempt', 'way', 'funni', 'neither', 'underlin', 'seriou', 'attempt', 'film', '4', 'sub', 'plot', 'benjamin', 'famili', 'father', 'betray', 'wife', 'still', 'know', 'put', 'bore', 'well', 'person', 'hate', 'film', 'charact', 'benjamin', 'without', 'messag', 'concept', 'scheme', 'whatev', 'fail', 'attempt', 'dramat', 'seriou', 'howev', 'imag', 'peopl', 'may', 'find', 'sensibl', 'touch', 'like', 'sister', 'probabl', 'like', 'one', 'hate', '17', 'year', 'old', 'boy', 'write', 'autobiographi', 'seem', 'best', 'idea', 'make', 'film', '2', '10', 'banana'], 1) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. 
###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:484: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None' warnings.warn("The parameter 'token_pattern' will not be used" ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'ingrid', 'victorian', 'optimist', 'spill', 'handicap', 'jill', 'playboy', 'modest'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'vastli', 'banana', 'dubiou', 'scarfac', '21st', 'verg', 'omin', 'mice'} ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data (one possible starting point is sketched just before the train/validation split below). (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. 
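Before we move on and split the new data, here is one possible way to start digging into the open question above: compare how often the words that are unique to the new vocabulary actually occur in the original reviews versus the new reviews. This is only a sketch, and it assumes the Step 3 cache (`preprocessed_data.pkl`) is still on disk and that `new_X` has not yet been cleared:

```python
import pickle
from collections import Counter

# Reload the original (preprocessed) training reviews from the Step 3 cache.
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
    original_cache = pickle.load(f)

original_counts = Counter(word for review in original_cache['words_train'] for word in review)
new_counts = Counter(word for review in new_X for word in review)

original_total = sum(original_counts.values())
new_total = sum(new_counts.values())

# Relative frequency (occurrences per million words) of the words unique to the new vocabulary.
for word in sorted(new_vocabulary - original_vocabulary):
    print("{:>10}  original: {:8.1f}  new: {:8.1f}".format(
        word,
        1e6 * original_counts[word] / original_total,
        1e6 * new_counts[word] / new_total))
```

A word that is rare (or absent) in the original reviews but common in the new ones is a strong hint about what the new vocabulary captures that the old one misses.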
###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. # And then set the algorithm specific parameters. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. 
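One aside before training, prompted by the earlier note about the Lambda function: the new model is only meaningful together with the new vocabulary, so whatever ends up serving the model needs access to that vocabulary in order to build its bag-of-words vectors the same way. Below is a minimal sketch of persisting it alongside the training data; the file name `new_vocabulary.pkl` is just an illustrative choice, not something the rest of this notebook relies on:

```python
import pickle

# Save the word -> index mapping that the new model's bag-of-words encoding depends on ...
with open(os.path.join(data_dir, 'new_vocabulary.pkl'), 'wb') as f:
    pickle.dump(new_vectorizer.vocabulary_, f)

# ... and keep a copy next to the training data in S3 so the inference side can fetch it later.
new_vocabulary_location = session.upload_data(os.path.join(data_dir, 'new_vocabulary.pkl'), key_prefix=prefix)
```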
###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2020-09-30 20:04:31 Starting - Starting the training job... 2020-09-30 20:04:34 Starting - Launching requested ML instances...... 2020-09-30 20:05:40 Starting - Preparing the instances for training...... 2020-09-30 20:06:45 Downloading - Downloading input data... 2020-09-30 20:07:30 Training - Training image download completed. Training in progress..Arguments: train [2020-09-30:20:07:30:INFO] Running standalone xgboost training. [2020-09-30:20:07:30:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8470.88mb [2020-09-30:20:07:30:INFO] Determined delimiter of CSV input is ',' [20:07:30] S3DistributionType set as FullyReplicated [20:07:32] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-09-30:20:07:32:INFO] Determined delimiter of CSV input is ',' [20:07:32] S3DistributionType set as FullyReplicated [20:07:33] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [20:07:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 0 pruned nodes, max_depth=5 [0]#011train-error:0.318067#011validation-error:0.3789 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. 
[20:07:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 10 pruned nodes, max_depth=5 [1]#011train-error:0.299#011validation-error:0.3786 [20:07:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 2 pruned nodes, max_depth=5 [2]#011train-error:0.28#011validation-error:0.3633 [20:07:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 52 extra nodes, 2 pruned nodes, max_depth=5 [3]#011train-error:0.276733#011validation-error:0.3636 [20:07:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [4]#011train-error:0.2678#011validation-error:0.3795 [20:07:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [5]#011train-error:0.265#011validation-error:0.3792 [20:07:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 10 pruned nodes, max_depth=5 [6]#011train-error:0.255867#011validation-error:0.379 [20:07:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [7]#011train-error:0.252267#011validation-error:0.368 [20:07:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [8]#011train-error:0.243333#011validation-error:0.3637 [20:07:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [9]#011train-error:0.237467#011validation-error:0.3633 [20:07:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [10]#011train-error:0.232733#011validation-error:0.3501 [20:07:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [11]#011train-error:0.228733#011validation-error:0.35 [20:07:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [12]#011train-error:0.222933#011validation-error:0.3364 [20:07:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [13]#011train-error:0.2202#011validation-error:0.3389 [20:07:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 12 pruned nodes, max_depth=5 [14]#011train-error:0.214#011validation-error:0.33 [20:07:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [15]#011train-error:0.211467#011validation-error:0.3248 [20:07:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [16]#011train-error:0.210267#011validation-error:0.3249 [20:07:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 2 pruned nodes, max_depth=5 [17]#011train-error:0.208733#011validation-error:0.3249 [20:08:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [18]#011train-error:0.204867#011validation-error:0.3244 [20:08:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [19]#011train-error:0.202267#011validation-error:0.3325 [20:08:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [20]#011train-error:0.199867#011validation-error:0.3253 [20:08:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 12 pruned nodes, max_depth=5 [21]#011train-error:0.197#011validation-error:0.3295 [20:08:05] src/tree/updater_prune.cc:74: tree pruning 
end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [22]#011train-error:0.1954#011validation-error:0.3248 [20:08:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [23]#011train-error:0.193733#011validation-error:0.3232 [20:08:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [24]#011train-error:0.192867#011validation-error:0.3235 [20:08:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [25]#011train-error:0.190067#011validation-error:0.3212 [20:08:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [26]#011train-error:0.1876#011validation-error:0.3163 [20:08:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [27]#011train-error:0.186733#011validation-error:0.3182 [20:08:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [28]#011train-error:0.187#011validation-error:0.317 [20:08:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [29]#011train-error:0.1866#011validation-error:0.3126 [20:08:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [30]#011train-error:0.182667#011validation-error:0.3114 [20:08:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5 [31]#011train-error:0.179267#011validation-error:0.3107 [20:08:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [32]#011train-error:0.177667#011validation-error:0.312 [20:08:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [33]#011train-error:0.176667#011validation-error:0.3113 [20:08:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [34]#011train-error:0.175867#011validation-error:0.3101 [20:08:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [35]#011train-error:0.175333#011validation-error:0.3075 [20:08:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [36]#011train-error:0.173#011validation-error:0.3081 [20:08:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [37]#011train-error:0.171467#011validation-error:0.3097 [20:08:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [38]#011train-error:0.169533#011validation-error:0.309 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? 
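One way the leakage could be addressed is to hold back a slice of the new data before any retraining happens and to evaluate only on that held-out slice. The cell below is just a sketch of that idea; it assumes `new_XV` and `new_Y` are still in memory (in the flow above `new_XV` has already been set to `None`), so in practice the split would need to happen back when the new data was first encoded.
###Code
# Sketch only: reserve a hold-out portion of the new data that is never used for
# training or validation (assumes the encoded arrays new_XV and new_Y still exist).
from sklearn.model_selection import train_test_split

new_XV_model, new_XV_holdout, new_Y_model, new_Y_holdout = train_test_split(
    new_XV, new_Y, test_size=0.2, random_state=0)

# new_XV_model / new_Y_model would then be split into training and validation sets as
# before, while the hold-out portion is used only to measure how well the re-trained
# model generalises to the new distribution.
###Output
_____no_output_____
###Markdown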
First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count=1, instance_type='ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ..............................2020-09-30T20:16:08.121:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2020-09-30 20:16:07 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-09-30 20:16:07 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-09-30 20:16:07 +0000] [1] [INFO] Using worker: gevent [2020-09-30 20:16:07 +0000] [37] [INFO] Booting worker with pid: 37 Arguments: serve [2020-09-30 20:16:07 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-09-30 20:16:07 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-09-30 20:16:07 +0000] [1] [INFO] Using worker: gevent [2020-09-30 20:16:07 +0000] [37] [INFO] Booting worker with pid: 37 [2020-09-30 20:16:08 +0000] [38] [INFO] Booting worker with pid: 38 [2020-09-30 20:16:08 +0000] [39] [INFO] Booting worker with pid: 39 [2020-09-30 20:16:08 +0000] [40] [INFO] Booting worker with pid: 40 [2020-09-30:20:16:08:INFO] Model loaded successfully for worker : 38 [2020-09-30:20:16:08:INFO] Model loaded successfully for worker : 37 [2020-09-30:20:16:08:INFO] Model loaded successfully for worker : 39 [2020-09-30:20:16:08:INFO] Model loaded successfully for worker : 40 [2020-09-30:20:16:08:INFO] Sniff delimiter as ',' [2020-09-30:20:16:08:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:08:INFO] Sniff delimiter as ',' [2020-09-30:20:16:08:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:08:INFO] Sniff delimiter as ',' [2020-09-30:20:16:08:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:08:INFO] Sniff delimiter as ',' [2020-09-30:20:16:08:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:11:INFO] Sniff delimiter as ',' [2020-09-30:20:16:11:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:11:INFO] Sniff delimiter as ',' [2020-09-30:20:16:11:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:11:INFO] Sniff delimiter as ',' [2020-09-30:20:16:11:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:11:INFO] Sniff delimiter as ',' [2020-09-30 20:16:08 +0000] [38] [INFO] Booting worker with pid: 38 [2020-09-30 20:16:08 +0000] [39] [INFO] Booting worker with pid: 39 [2020-09-30 20:16:08 +0000] [40] [INFO] Booting worker with pid: 40 [2020-09-30:20:16:08:INFO] Model loaded successfully for worker : 38 [2020-09-30:20:16:08:INFO] Model loaded successfully for worker : 37 [2020-09-30:20:16:08:INFO] Model loaded successfully for worker : 39 [2020-09-30:20:16:08:INFO] Model loaded successfully for worker : 40 [2020-09-30:20:16:08:INFO] Sniff delimiter as ',' [2020-09-30:20:16:08:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:08:INFO] Sniff delimiter as ',' 
[2020-09-30:20:16:08:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:08:INFO] Sniff delimiter as ',' [2020-09-30:20:16:08:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:08:INFO] Sniff delimiter as ',' [2020-09-30:20:16:08:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:11:INFO] Sniff delimiter as ',' [2020-09-30:20:16:11:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:11:INFO] Sniff delimiter as ',' [2020-09-30:20:16:11:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:11:INFO] Sniff delimiter as ',' [2020-09-30:20:16:11:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:11:INFO] Sniff delimiter as ',' [2020-09-30:20:16:11:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:11:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:13:INFO] Sniff delimiter as ',' [2020-09-30:20:16:13:INFO] Sniff delimiter as ',' [2020-09-30:20:16:13:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:13:INFO] Sniff delimiter as ',' [2020-09-30:20:16:13:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:13:INFO] Sniff delimiter as ',' [2020-09-30:20:16:13:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:13:INFO] Sniff delimiter as ',' [2020-09-30:20:16:13:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:13:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:13:INFO] Sniff delimiter as ',' [2020-09-30:20:16:13:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:13:INFO] Sniff delimiter as ',' [2020-09-30:20:16:13:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:13:INFO] Sniff delimiter as ',' [2020-09-30:20:16:13:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:15:INFO] Sniff delimiter as ',' [2020-09-30:20:16:15:INFO] Sniff delimiter as ',' [2020-09-30:20:16:15:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:15:INFO] Sniff delimiter as ',' [2020-09-30:20:16:15:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:16:INFO] Sniff delimiter as ',' [2020-09-30:20:16:16:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:16:INFO] Sniff delimiter as ',' [2020-09-30:20:16:16:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:15:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:15:INFO] Sniff delimiter as ',' [2020-09-30:20:16:15:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:16:INFO] Sniff delimiter as ',' [2020-09-30:20:16:16:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:16:INFO] Sniff delimiter as ',' [2020-09-30:20:16:16:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:18:INFO] Sniff delimiter as ',' [2020-09-30:20:16:18:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:18:INFO] Sniff delimiter as ',' [2020-09-30:20:16:18:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:18:INFO] Sniff delimiter as ',' [2020-09-30:20:16:18:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:18:INFO] Sniff delimiter as ',' [2020-09-30:20:16:18:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:18:INFO] Sniff delimiter as ',' [2020-09-30:20:16:18:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:18:INFO] Sniff delimiter as ',' [2020-09-30:20:16:18:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:18:INFO] Sniff delimiter as ',' [2020-09-30:20:16:18:INFO] Determined delimiter of CSV input is 
',' [2020-09-30:20:16:18:INFO] Sniff delimiter as ',' [2020-09-30:20:16:18:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:21:INFO] Sniff delimiter as ',' [2020-09-30:20:16:21:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:21:INFO] Sniff delimiter as ',' [2020-09-30:20:16:21:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:23:INFO] Sniff delimiter as ',' [2020-09-30:20:16:23:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:23:INFO] Sniff delimiter as ',' [2020-09-30:20:16:23:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:23:INFO] Sniff delimiter as ',' [2020-09-30:20:16:23:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:23:INFO] Sniff delimiter as ',' [2020-09-30:20:16:23:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:23:INFO] Sniff delimiter as ',' [2020-09-30:20:16:23:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:23:INFO] Sniff delimiter as ',' [2020-09-30:20:16:23:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:23:INFO] Sniff delimiter as ',' [2020-09-30:20:16:23:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:23:INFO] Sniff delimiter as ',' [2020-09-30:20:16:23:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:25:INFO] Sniff delimiter as ',' [2020-09-30:20:16:25:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:25:INFO] Sniff delimiter as ',' [2020-09-30:20:16:25:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:25:INFO] Sniff delimiter as ',' [2020-09-30:20:16:25:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:26:INFO] Sniff delimiter as ',' [2020-09-30:20:16:26:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:25:INFO] Sniff delimiter as ',' [2020-09-30:20:16:25:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:25:INFO] Sniff delimiter as ',' [2020-09-30:20:16:25:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:25:INFO] Sniff delimiter as ',' [2020-09-30:20:16:25:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:26:INFO] Sniff delimiter as ',' [2020-09-30:20:16:26:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:27:INFO] Sniff delimiter as ',' [2020-09-30:20:16:27:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:28:INFO] Sniff delimiter as ',' [2020-09-30:20:16:28:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:28:INFO] Sniff delimiter as ',' [2020-09-30:20:16:28:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:27:INFO] Sniff delimiter as ',' [2020-09-30:20:16:27:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:28:INFO] Sniff delimiter as ',' [2020-09-30:20:16:28:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:28:INFO] Sniff delimiter as ',' [2020-09-30:20:16:28:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:28:INFO] Sniff delimiter as ',' [2020-09-30:20:16:28:INFO] Determined delimiter of CSV input is ',' [2020-09-30:20:16:28:INFO] Sniff delimiter as ',' [2020-09-30:20:16:28:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. 
###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/365.8 KiB (2.7 MiB/s) with 1 file(s) remaining Completed 365.8 KiB/365.8 KiB (3.8 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-428747017283/xgboost-2020-09-30-20-11-24-291/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. 
As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output -----------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. 
Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). 
It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2019-10-21 19:23:24-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 10.4MB/s in 11s 2019-10-21 19:23:35 (7.41 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our 
training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. ###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])

val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.
For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
    os.makedirs(data_dir)

pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
pd.DataFrame(test_y).to_csv(os.path.join(data_dir, 'test_y.csv'), header=False, index=False)

# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3
Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.
For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.
Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable.
To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.
For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker

session = sagemaker.Session() # Store the current SageMaker session

# S3 prefix (which folder will we use)
prefix = 'sentiment-update'

test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
test_y_location = session.upload_data(os.path.join(data_dir, 'test_y.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
print(test_location)
###Output
s3://sagemaker-eu-central-1-668160054588/sentiment-update/test.csv
###Markdown
Creating the XGBoost model
Now that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.
- Model Artifacts
- Training Code (Container)
- Inference Code (Container)
The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.
The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.
The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role

# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()

# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri

container = get_image_uri(session.boto_region_name, 'xgboost')

# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
                                    role,      # What is our current IAM Role
                                    train_instance_count=1, # How many compute instances
                                    train_instance_type='ml.m4.xlarge', # What kind of compute instances
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                    sagemaker_session=session)

# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2019-10-21 19:39:11 Starting - Starting the training job... 2019-10-21 19:39:14 Starting - Launching requested ML instances...... 2019-10-21 19:40:12 Starting - Preparing the instances for training... 2019-10-21 19:40:53 Downloading - Downloading input data... 2019-10-21 19:41:38 Training - Training image download completed. Training in progress..Arguments: train [2019-10-21:19:41:39:INFO] Running standalone xgboost training. [2019-10-21:19:41:39:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8588.7mb [2019-10-21:19:41:39:INFO] Determined delimiter of CSV input is ',' [19:41:39] S3DistributionType set as FullyReplicated [19:41:41] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2019-10-21:19:41:41:INFO] Determined delimiter of CSV input is ',' [19:41:41] S3DistributionType set as FullyReplicated [19:41:42] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [19:41:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 2 pruned nodes, max_depth=5 [0]#011train-error:0.2964#011validation-error:0.3052 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. 
[19:41:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 10 pruned nodes, max_depth=5 [1]#011train-error:0.2778#011validation-error:0.2872 [19:41:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 0 pruned nodes, max_depth=5 [2]#011train-error:0.277533#011validation-error:0.2844 [19:41:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5 [3]#011train-error:0.2774#011validation-error:0.2865 [19:41:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [4]#011train-error:0.266933#011validation-error:0.2752 [19:41:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 0 pruned nodes, max_depth=5 [5]#011train-error:0.258733#011validation-error:0.2661 [19:41:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [6]#011train-error:0.245867#011validation-error:0.2573 [19:41:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [7]#011train-error:0.237933#011validation-error:0.2502 [19:41:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [8]#011train-error:0.2312#011validation-error:0.2441 [19:41:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [9]#011train-error:0.229267#011validation-error:0.2428 [19:41:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [10]#011train-error:0.227933#011validation-error:0.238 [19:42:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [11]#011train-error:0.222933#011validation-error:0.234 [19:42:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [12]#011train-error:0.2146#011validation-error:0.2277 [19:42:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [13]#011train-error:0.2102#011validation-error:0.2242 [19:42:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [14]#011train-error:0.2068#011validation-error:0.2204 [19:42:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5 [15]#011train-error:0.206333#011validation-error:0.2185 [19:42:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [16]#011train-error:0.201467#011validation-error:0.2173 [19:42:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [17]#011train-error:0.1984#011validation-error:0.2147 [19:42:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [18]#011train-error:0.1934#011validation-error:0.211 [19:42:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [19]#011train-error:0.191867#011validation-error:0.2089 [19:42:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 0 pruned nodes, max_depth=5 [20]#011train-error:0.188867#011validation-error:0.2057 [19:42:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [21]#011train-error:0.185067#011validation-error:0.2043 [19:42:14] src/tree/updater_prune.cc:74: tree 
pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [22]#011train-error:0.181467#011validation-error:0.2021 [19:42:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [23]#011train-error:0.179333#011validation-error:0.1985 [19:42:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [24]#011train-error:0.1776#011validation-error:0.1966 [19:42:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 12 pruned nodes, max_depth=5 [25]#011train-error:0.176#011validation-error:0.1944 [19:42:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [26]#011train-error:0.1738#011validation-error:0.1937 [19:42:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [27]#011train-error:0.172467#011validation-error:0.1933 [19:42:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5 [28]#011train-error:0.1706#011validation-error:0.1912 [19:42:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [29]#011train-error:0.166067#011validation-error:0.1899 [19:42:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [30]#011train-error:0.163933#011validation-error:0.1887 [19:42:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5 [31]#011train-error:0.1632#011validation-error:0.1873 [19:42:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [32]#011train-error:0.163#011validation-error:0.1858 [19:42:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [33]#011train-error:0.160733#011validation-error:0.1841 [19:42:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [34]#011train-error:0.158867#011validation-error:0.1838 [19:42:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5 [35]#011train-error:0.1578#011validation-error:0.1825 [19:42:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [36]#011train-error:0.156667#011validation-error:0.1803 [19:42:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [37]#011train-error:0.154933#011validation-error:0.1784 [19:42:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [38]#011train-error:0.153333#011validation-error:0.1783 [19:42:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [39]#011train-error:0.1516#011validation-error:0.1778 [19:42:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [40]#011train-error:0.150933#011validation-error:0.1765 [19:42:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [41]#011train-error:0.150467#011validation-error:0.1776 [19:42:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [42]#011train-error:0.1496#011validation-error:0.1762 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to 
see how well it performs. To do this we will use SageMakers Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can peform inference on a large number of samples. An example of this in industry might be peforming an end of month report. This method of inference can also be useful to us as it means to can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer objects from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output ...................Arguments: serve [2019-10-21 19:48:00 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2019-10-21 19:48:00 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2019-10-21 19:48:00 +0000] [1] [INFO] Using worker: gevent [2019-10-21 19:48:00 +0000] [37] [INFO] Booting worker with pid: 37 [2019-10-21 19:48:00 +0000] [38] [INFO] Booting worker with pid: 38 [2019-10-21 19:48:00 +0000] [39] [INFO] Booting worker with pid: 39 Arguments: serve [2019-10-21 19:48:00 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2019-10-21 19:48:00 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2019-10-21 19:48:00 +0000] [1] [INFO] Using worker: gevent [2019-10-21 19:48:00 +0000] [37] [INFO] Booting worker with pid: 37 [2019-10-21 19:48:00 +0000] [38] [INFO] Booting worker with pid: 38 [2019-10-21 19:48:00 +0000] [39] [INFO] Booting worker with pid: 39 [2019-10-21 19:48:00 +0000] [40] [INFO] Booting worker with pid: 40 [2019-10-21:19:48:00:INFO] Model loaded successfully for worker : 37 [2019-10-21:19:48:00:INFO] Model loaded successfully for worker : 39 [2019-10-21:19:48:00:INFO] Model loaded successfully for worker : 38 [2019-10-21:19:48:00:INFO] Model loaded successfully for worker : 40 [2019-10-21 19:48:00 +0000] [40] [INFO] Booting worker with pid: 40 [2019-10-21:19:48:00:INFO] Model loaded successfully for worker : 37 [2019-10-21:19:48:00:INFO] Model loaded successfully for worker : 39 [2019-10-21:19:48:00:INFO] Model loaded successfully for worker : 38 [2019-10-21:19:48:00:INFO] Model loaded successfully for worker : 40 2019-10-21T19:48:04.992:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2019-10-21:19:48:07:INFO] Sniff delimiter as ',' [2019-10-21:19:48:07:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:07:INFO] Sniff delimiter as ',' [2019-10-21:19:48:07:INFO] Determined delimiter of CSV input is 
',' [2019-10-21:19:48:08:INFO] Sniff delimiter as ',' [2019-10-21:19:48:08:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:07:INFO] Sniff delimiter as ',' [2019-10-21:19:48:07:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:07:INFO] Sniff delimiter as ',' [2019-10-21:19:48:07:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:08:INFO] Sniff delimiter as ',' [2019-10-21:19:48:08:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:08:INFO] Sniff delimiter as ',' [2019-10-21:19:48:08:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:08:INFO] Sniff delimiter as ',' [2019-10-21:19:48:08:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:10:INFO] Sniff delimiter as ',' [2019-10-21:19:48:10:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:10:INFO] Sniff delimiter as ',' [2019-10-21:19:48:10:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:10:INFO] Sniff delimiter as ',' [2019-10-21:19:48:10:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:10:INFO] Sniff delimiter as ',' [2019-10-21:19:48:10:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:10:INFO] Sniff delimiter as ',' [2019-10-21:19:48:10:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:10:INFO] Sniff delimiter as ',' [2019-10-21:19:48:10:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:10:INFO] Sniff delimiter as ',' [2019-10-21:19:48:10:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:10:INFO] Sniff delimiter as ',' [2019-10-21:19:48:10:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:15:INFO] Sniff delimiter as ',' [2019-10-21:19:48:15:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:15:INFO] Sniff delimiter as ',' [2019-10-21:19:48:15:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:15:INFO] Sniff delimiter as ',' [2019-10-21:19:48:15:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:15:INFO] Sniff delimiter as ',' [2019-10-21:19:48:15:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:15:INFO] Sniff delimiter as ',' [2019-10-21:19:48:15:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:15:INFO] Sniff delimiter as ',' [2019-10-21:19:48:15:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:15:INFO] Sniff delimiter as ',' [2019-10-21:19:48:15:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:15:INFO] Sniff delimiter as ',' [2019-10-21:19:48:15:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:17:INFO] Sniff delimiter as ',' [2019-10-21:19:48:17:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:17:INFO] Sniff delimiter as ',' [2019-10-21:19:48:17:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:17:INFO] Sniff delimiter as ',' [2019-10-21:19:48:17:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:18:INFO] Sniff delimiter as ',' [2019-10-21:19:48:18:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:17:INFO] Sniff delimiter as ',' [2019-10-21:19:48:17:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:17:INFO] Sniff delimiter as ',' [2019-10-21:19:48:17:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:17:INFO] Sniff delimiter as ',' [2019-10-21:19:48:17:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:18:INFO] Sniff delimiter as ',' [2019-10-21:19:48:18:INFO] Determined delimiter of CSV input is ',' 
[2019-10-21:19:48:20:INFO] Sniff delimiter as ',' [2019-10-21:19:48:20:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:20:INFO] Sniff delimiter as ',' [2019-10-21:19:48:20:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:20:INFO] Sniff delimiter as ',' [2019-10-21:19:48:20:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:20:INFO] Sniff delimiter as ',' [2019-10-21:19:48:20:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:20:INFO] Sniff delimiter as ',' [2019-10-21:19:48:20:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:20:INFO] Sniff delimiter as ',' [2019-10-21:19:48:20:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:20:INFO] Sniff delimiter as ',' [2019-10-21:19:48:20:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:20:INFO] Sniff delimiter as ',' [2019-10-21:19:48:20:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:22:INFO] Sniff delimiter as ',' [2019-10-21:19:48:22:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:22:INFO] Sniff delimiter as ',' [2019-10-21:19:48:22:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:22:INFO] Sniff delimiter as ',' [2019-10-21:19:48:22:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:22:INFO] Sniff delimiter as ',' [2019-10-21:19:48:22:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:24:INFO] Sniff delimiter as ',' [2019-10-21:19:48:24:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:24:INFO] Sniff delimiter as ',' [2019-10-21:19:48:24:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:24:INFO] Sniff delimiter as ',' [2019-10-21:19:48:24:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:25:INFO] Sniff delimiter as ',' [2019-10-21:19:48:25:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:25:INFO] Sniff delimiter as ',' [2019-10-21:19:48:25:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:24:INFO] Sniff delimiter as ',' [2019-10-21:19:48:24:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:25:INFO] Sniff delimiter as ',' [2019-10-21:19:48:25:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:25:INFO] Sniff delimiter as ',' [2019-10-21:19:48:25:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:27:INFO] Sniff delimiter as ',' [2019-10-21:19:48:27:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:27:INFO] Sniff delimiter as ',' [2019-10-21:19:48:27:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:27:INFO] Sniff delimiter as ',' [2019-10-21:19:48:27:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:27:INFO] Sniff delimiter as ',' [2019-10-21:19:48:27:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:27:INFO] Sniff delimiter as ',' [2019-10-21:19:48:27:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:27:INFO] Sniff delimiter as ',' [2019-10-21:19:48:27:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:27:INFO] Sniff delimiter as ',' [2019-10-21:19:48:27:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:27:INFO] Sniff delimiter as ',' [2019-10-21:19:48:27:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:29:INFO] Sniff delimiter as ',' [2019-10-21:19:48:29:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:29:INFO] Sniff delimiter as ',' [2019-10-21:19:48:29:INFO] Determined delimiter of CSV input is ',' 
[2019-10-21:19:48:29:INFO] Sniff delimiter as ',' [2019-10-21:19:48:29:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:29:INFO] Sniff delimiter as ',' [2019-10-21:19:48:29:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:29:INFO] Sniff delimiter as ',' [2019-10-21:19:48:29:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:29:INFO] Sniff delimiter as ',' [2019-10-21:19:48:29:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:29:INFO] Sniff delimiter as ',' [2019-10-21:19:48:29:INFO] Determined delimiter of CSV input is ',' [2019-10-21:19:48:29:INFO] Sniff delimiter as ',' [2019-10-21:19:48:29:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/368.7 KiB (2.7 MiB/s) with 1 file(s) remaining Completed 368.7 KiB/368.7 KiB (3.8 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-central-1-668160054588/xgboost-2019-10-21-19-44-55-379/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
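###Markdown As a small illustration of how a `CountVectorizer` built from a *fixed* vocabulary behaves on pre-tokenized text, the sketch below uses a made-up toy vocabulary and two made-up reviews (it is only a sketch, not our data). Because each review is already a list of words, the preprocessor and tokenizer are replaced with identity functions so that the words are not split or lowercased a second time; any word that is not in the supplied vocabulary is simply ignored. ###Code
# Toy sketch only -- the vocabulary and reviews here are hypothetical, not our data.
from sklearn.feature_extraction.text import CountVectorizer

toy_vocabulary = {'great': 0, 'terribl': 1, 'movi': 2}
toy_reviews = [['great', 'movi', 'great'], ['terribl', 'movi', 'boring']]

toy_vectorizer = CountVectorizer(vocabulary=toy_vocabulary,
                                 preprocessor=lambda x: x,
                                 tokenizer=lambda x: x)

# Each row counts the vocabulary words in one review; 'boring' is dropped because it is
# not in the fixed vocabulary. The first row therefore comes out as [2, 0, 1].
toy_counts = toy_vectorizer.transform(toy_reviews).toarray()
###Output _____no_output_____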
###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) print(new_data_location) ###Output s3://sagemaker-eu-central-1-668160054588/sentiment-update/new_data.csv ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. 
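# Note: as with the test set earlier, content_type='text/csv' tells SageMaker how the records
# in the uploaded file are serialized, and split_type='Line' lets it split the (potentially
# large) CSV file into one record per line when it builds batches for the transform job.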
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ...................Arguments: serve [2019-10-21 20:17:48 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2019-10-21 20:17:48 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2019-10-21 20:17:48 +0000] [1] [INFO] Using worker: gevent [2019-10-21 20:17:48 +0000] [38] [INFO] Booting worker with pid: 38 [2019-10-21 20:17:48 +0000] [39] [INFO] Booting worker with pid: 39 Arguments: serve [2019-10-21 20:17:48 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2019-10-21 20:17:48 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2019-10-21 20:17:48 +0000] [1] [INFO] Using worker: gevent [2019-10-21 20:17:48 +0000] [38] [INFO] Booting worker with pid: 38 [2019-10-21 20:17:48 +0000] [39] [INFO] Booting worker with pid: 39 [2019-10-21 20:17:48 +0000] [40] [INFO] Booting worker with pid: 40 [2019-10-21:20:17:48:INFO] Model loaded successfully for worker : 38 [2019-10-21:20:17:48:INFO] Model loaded successfully for worker : 39 [2019-10-21 20:17:48 +0000] [41] [INFO] Booting worker with pid: 41 [2019-10-21:20:17:48:INFO] Model loaded successfully for worker : 40 [2019-10-21:20:17:48:INFO] Model loaded successfully for worker : 41 [2019-10-21 20:17:48 +0000] [40] [INFO] Booting worker with pid: 40 [2019-10-21:20:17:48:INFO] Model loaded successfully for worker : 38 [2019-10-21:20:17:48:INFO] Model loaded successfully for worker : 39 [2019-10-21 20:17:48 +0000] [41] [INFO] Booting worker with pid: 41 [2019-10-21:20:17:48:INFO] Model loaded successfully for worker : 40 [2019-10-21:20:17:48:INFO] Model loaded successfully for worker : 41 2019-10-21T20:17:52.627:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2019-10-21:20:17:55:INFO] Sniff delimiter as ',' [2019-10-21:20:17:55:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:17:55:INFO] Sniff delimiter as ',' [2019-10-21:20:17:55:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:17:55:INFO] Sniff delimiter as ',' [2019-10-21:20:17:55:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:17:55:INFO] Sniff delimiter as ',' [2019-10-21:20:17:55:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:17:56:INFO] Sniff delimiter as ',' [2019-10-21:20:17:56:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:17:55:INFO] Sniff delimiter as ',' [2019-10-21:20:17:55:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:17:55:INFO] Sniff delimiter as ',' [2019-10-21:20:17:55:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:17:56:INFO] Sniff delimiter as ',' [2019-10-21:20:17:56:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:17:58:INFO] Sniff delimiter as ',' [2019-10-21:20:17:58:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:17:58:INFO] Sniff delimiter as ',' [2019-10-21:20:17:58:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:17:58:INFO] Sniff delimiter as ',' [2019-10-21:20:17:58:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:17:58:INFO] Sniff delimiter as ',' [2019-10-21:20:17:58:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:17:58:INFO] Sniff delimiter as ',' [2019-10-21:20:17:58:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:17:58:INFO] Sniff delimiter as ',' [2019-10-21:20:17:58:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:17:58:INFO] Sniff delimiter as ',' [2019-10-21:20:17:58:INFO] Determined delimiter of CSV input is 
',' [2019-10-21:20:17:58:INFO] Sniff delimiter as ',' [2019-10-21:20:17:58:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:02:INFO] Sniff delimiter as ',' [2019-10-21:20:18:02:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:03:INFO] Sniff delimiter as ',' [2019-10-21:20:18:03:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:03:INFO] Sniff delimiter as ',' [2019-10-21:20:18:03:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:03:INFO] Sniff delimiter as ',' [2019-10-21:20:18:02:INFO] Sniff delimiter as ',' [2019-10-21:20:18:02:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:03:INFO] Sniff delimiter as ',' [2019-10-21:20:18:03:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:03:INFO] Sniff delimiter as ',' [2019-10-21:20:18:03:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:03:INFO] Sniff delimiter as ',' [2019-10-21:20:18:03:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:03:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:05:INFO] Sniff delimiter as ',' [2019-10-21:20:18:05:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:05:INFO] Sniff delimiter as ',' [2019-10-21:20:18:05:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:05:INFO] Sniff delimiter as ',' [2019-10-21:20:18:05:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:05:INFO] Sniff delimiter as ',' [2019-10-21:20:18:05:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:05:INFO] Sniff delimiter as ',' [2019-10-21:20:18:05:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:05:INFO] Sniff delimiter as ',' [2019-10-21:20:18:05:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:05:INFO] Sniff delimiter as ',' [2019-10-21:20:18:05:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:05:INFO] Sniff delimiter as ',' [2019-10-21:20:18:05:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:07:INFO] Sniff delimiter as ',' [2019-10-21:20:18:07:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:07:INFO] Sniff delimiter as ',' [2019-10-21:20:18:07:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:07:INFO] Sniff delimiter as ',' [2019-10-21:20:18:07:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:07:INFO] Sniff delimiter as ',' [2019-10-21:20:18:07:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:07:INFO] Sniff delimiter as ',' [2019-10-21:20:18:07:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:07:INFO] Sniff delimiter as ',' [2019-10-21:20:18:07:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:08:INFO] Sniff delimiter as ',' [2019-10-21:20:18:08:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:08:INFO] Sniff delimiter as ',' [2019-10-21:20:18:08:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:10:INFO] Sniff delimiter as ',' [2019-10-21:20:18:10:INFO] Sniff delimiter as ',' [2019-10-21:20:18:10:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:10:INFO] Sniff delimiter as ',' [2019-10-21:20:18:10:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:10:INFO] Sniff delimiter as ',' [2019-10-21:20:18:10:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:10:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:10:INFO] Sniff delimiter as ',' [2019-10-21:20:18:10:INFO] Determined delimiter of CSV input is ',' 
[2019-10-21:20:18:10:INFO] Sniff delimiter as ',' [2019-10-21:20:18:10:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:12:INFO] Sniff delimiter as ',' [2019-10-21:20:18:12:INFO] Sniff delimiter as ',' [2019-10-21:20:18:12:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:12:INFO] Sniff delimiter as ',' [2019-10-21:20:18:12:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:12:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:12:INFO] Sniff delimiter as ',' [2019-10-21:20:18:12:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:12:INFO] Sniff delimiter as ',' [2019-10-21:20:18:12:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:13:INFO] Sniff delimiter as ',' [2019-10-21:20:18:13:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:12:INFO] Sniff delimiter as ',' [2019-10-21:20:18:12:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:13:INFO] Sniff delimiter as ',' [2019-10-21:20:18:13:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:14:INFO] Sniff delimiter as ',' [2019-10-21:20:18:14:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:14:INFO] Sniff delimiter as ',' [2019-10-21:20:18:14:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:15:INFO] Sniff delimiter as ',' [2019-10-21:20:18:15:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:14:INFO] Sniff delimiter as ',' [2019-10-21:20:18:14:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:14:INFO] Sniff delimiter as ',' [2019-10-21:20:18:14:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:15:INFO] Sniff delimiter as ',' [2019-10-21:20:18:15:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:15:INFO] Sniff delimiter as ',' [2019-10-21:20:18:15:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:15:INFO] Sniff delimiter as ',' [2019-10-21:20:18:15:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:17:INFO] Sniff delimiter as ',' [2019-10-21:20:18:17:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:17:INFO] Sniff delimiter as ',' [2019-10-21:20:18:17:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:17:INFO] Sniff delimiter as ',' [2019-10-21:20:18:17:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:17:INFO] Sniff delimiter as ',' [2019-10-21:20:18:17:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:17:INFO] Sniff delimiter as ',' [2019-10-21:20:18:17:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:17:INFO] Sniff delimiter as ',' [2019-10-21:20:18:17:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:17:INFO] Sniff delimiter as ',' [2019-10-21:20:18:17:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:18:17:INFO] Sniff delimiter as ',' [2019-10-21:20:18:17:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/368.9 KiB (4.3 MiB/s) with 1 file(s) remaining Completed 368.9 KiB/368.9 KiB (6.0 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-central-1-668160054588/xgboost-2019-10-21-20-14-41-158/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. 
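###Markdown Each line of the `.out` file written by Batch Transform holds the raw score for the corresponding input row. Assuming the model was trained with the `binary:logistic` objective (as the re-trained model later in this notebook is), those scores are probabilities between `0` and `1`, so rounding at `0.5` turns them into hard `0`/`1` sentiment labels, which is what the next cell does. A small optional sketch for peeking at the raw scores first, reusing the `data_dir` and output file name from the surrounding cells: ###Code
# Optional sketch: inspect the raw scores before converting them to hard labels.
import os
import pandas as pd

raw_scores = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None).squeeze()
score_summary = raw_scores.describe()          # min and max should sit inside [0, 1]
positive_fraction = (raw_scores > 0.5).mean()  # share of the new reviews predicted positive
###Output _____no_output_____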
###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') ###Output WARNING:sagemaker:Using already existing model: xgboost-2019-10-21-19-39-11-550 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. 
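###Markdown For readers less familiar with Python generators, the toy sketch below (made-up label pairs, not our reviews) shows the property we rely on: each call to `next` only runs the loop far enough to reach the next mismatching pair, so we never have to score the whole data set up front just to look at a handful of mistakes. ###Code
# Toy illustration of lazy evaluation with a generator -- the pairs below are hypothetical.
def mismatches(pairs):
    for predicted, actual in pairs:
        if predicted != actual:
            yield predicted, actual

toy_pairs = [(1, 1), (0, 1), (1, 0), (0, 0)]
toy_gen = mismatches(toy_pairs)
first_mismatch = next(toy_gen)    # (0, 1) -- only the first two pairs were examined
second_mismatch = next(toy_gen)   # (1, 0)
###Output _____no_output_____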
###Code print(next(gn)) ###Output (['inspir', 'least', 'littl', 'ivi', 'benson', 'girl', 'orchestra', 'perform', 'throughout', 'war', 'year', 'covent', 'garden', 'opera', 'hous', 'film', 'chronicl', 'attempt', 'elderli', 'saxophon', 'player', 'reform', 'almost', 'girl', 'band', 'play', 'schoolgirl', 'toward', 'end', 'wwii', 'brief', 'flashback', 'origin', 'band', 'stage', 'bring', 'us', 'wonder', 'music', 'help', 'fill', 'background', 'band', 'member', 'particular', 'girl', 'relationship', 'lone', 'male', 'member', 'transvestit', 'drummer', 'tri', 'dodg', 'call', 'ian', 'holm', 'lord', 'ring', 'cromwel', 'fairfax', 'judi', 'dench', 'turn', 'superb', 'lead', 'perform', 'recent', 'widow', 'elizabeth', 'conniv', 'woman', 'patrick', 'drummer', 'late', 'joan', 'sim', 'perfect', 'band', 'leader', 'play', 'bar', 'piano', 'sea', 'side', 'june', 'whitfield', 'glow', 'salvat', 'armi', 'trombon', 'player', 'cameo', 'appear', 'great', 'like', 'cleo', 'lain', 'lesli', 'caron', 'olympia', 'dukaki', 'billi', 'whitelaw', 'make', 'unforgett', 'experi', 'movi', 'romp', 'memori', 'lane', 'star', 'cast', 'ought', 'right', 'bunch', 'hill', 'actress', 'say', 'hope', 'look', 'good', 'age', 'lesli', 'caron', 'particular', 'still', 'incred', 'fox', '69', 'year', 'age', 'certainli', 'still', 'get', 'puls', 'go', 'watch', 'mental', 'berat', 'cast', 'director', 'use', 'women', 'appropri', 'age', 'afterward', 'look', 'girl', 'discov', 'everi', 'one', 'old', 'enough', 'perform', 'london', '1944', 'although', 'might', 'bit', 'stretch', 'judi', 'dench', 'like', 'swing', 'band', 'thrive', 'nostalgia', 'want', 'see', 'good', 'woman', 'manag', 'look', 'almost', 'three', 'quarter', 'centuri', 'behind', 'miss', 'film', 'banana'], 0) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'spill', 'victorian', 'reincarn', 'ghetto', 'playboy', 'weari', '21st'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'masterson', 'omin', 'banana', 'orchestr', 'dubiou', 'optimist', 'sophi'} ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question:** What exactly is going on here?
Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary that we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
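###Markdown One detail worth calling out before saving the files: SageMaker's built-in XGBoost expects CSV training data with the label in the first column and no header row or index column, which is why the next cell concatenates the labels in front of the features and passes `header=False, index=False`. A toy sketch of that layout, using made-up values: ###Code
# Toy sketch of the label-first CSV layout expected by SageMaker's XGBoost (made-up values).
import pandas as pd

toy_labels = pd.DataFrame([1, 0])                      # first column: the sentiment label
toy_features = pd.DataFrame([[0, 2, 1], [3, 0, 0]])    # remaining columns: bag-of-words counts
toy_rows = pd.concat([toy_labels, toy_features], axis=1)

# With header=False and index=False this writes the lines "1,0,2,1" and "0,3,0,0".
toy_csv_text = toy_rows.to_csv(header=False, index=False)
###Output _____no_output_____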
###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2019-10-21 20:44:55 Starting - Starting the training job... 2019-10-21 20:44:57 Starting - Launching requested ML instances... 2019-10-21 20:45:55 Starting - Preparing the instances for training...... 2019-10-21 20:46:55 Downloading - Downloading input data...... 2019-10-21 20:47:46 Training - Training image download completed. Training in progress.Arguments: train [2019-10-21:20:47:47:INFO] Running standalone xgboost training. [2019-10-21:20:47:47:INFO] File size need to be processed in the node: 238.47mb.
Available memory size in the node: 8610.41mb [2019-10-21:20:47:47:INFO] Determined delimiter of CSV input is ',' [20:47:47] S3DistributionType set as FullyReplicated [20:47:48] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2019-10-21:20:47:48:INFO] Determined delimiter of CSV input is ',' [20:47:48] S3DistributionType set as FullyReplicated [20:47:49] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [20:47:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [0]#011train-error:0.300333#011validation-error:0.3096 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [20:47:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [1]#011train-error:0.298#011validation-error:0.3073 [20:47:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 0 pruned nodes, max_depth=5 [2]#011train-error:0.279467#011validation-error:0.287 [20:47:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [3]#011train-error:0.277933#011validation-error:0.2858 [20:47:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 4 pruned nodes, max_depth=5 [4]#011train-error:0.274867#011validation-error:0.2852 [20:48:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.274867#011validation-error:0.2831 [20:48:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [6]#011train-error:0.260733#011validation-error:0.2718 [20:48:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [7]#011train-error:0.2562#011validation-error:0.2696 [20:48:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [8]#011train-error:0.252067#011validation-error:0.2663 [20:48:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [9]#011train-error:0.241333#011validation-error:0.2545 [20:48:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [10]#011train-error:0.2376#011validation-error:0.2488 [20:48:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 14 pruned nodes, max_depth=5 [11]#011train-error:0.233333#011validation-error:0.2468 [20:48:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 12 pruned nodes, max_depth=5 [12]#011train-error:0.229467#011validation-error:0.2418 [20:48:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [13]#011train-error:0.227267#011validation-error:0.2371 [20:48:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [14]#011train-error:0.222733#011validation-error:0.2351 [20:48:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [15]#011train-error:0.2182#011validation-error:0.2305 [20:48:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [16]#011train-error:0.216533#011validation-error:0.229 [20:48:15] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [17]#011train-error:0.212133#011validation-error:0.2247 [20:48:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [18]#011train-error:0.209867#011validation-error:0.2233 [20:48:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [19]#011train-error:0.2078#011validation-error:0.2213 [20:48:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [20]#011train-error:0.2048#011validation-error:0.2208 [20:48:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [21]#011train-error:0.203067#011validation-error:0.2183 [20:48:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [22]#011train-error:0.200733#011validation-error:0.217 [20:48:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [23]#011train-error:0.197933#011validation-error:0.2169 [20:48:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [24]#011train-error:0.1956#011validation-error:0.2152 [20:48:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [25]#011train-error:0.192#011validation-error:0.2128 [20:48:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [26]#011train-error:0.190933#011validation-error:0.2121 [20:48:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [27]#011train-error:0.188267#011validation-error:0.2102 [20:48:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [28]#011train-error:0.1866#011validation-error:0.2075 [20:48:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [29]#011train-error:0.184733#011validation-error:0.2079 [20:48:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [30]#011train-error:0.1838#011validation-error:0.2072 [20:48:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [31]#011train-error:0.1812#011validation-error:0.2031 [20:48:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [32]#011train-error:0.1782#011validation-error:0.2026 [20:48:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [33]#011train-error:0.1766#011validation-error:0.2011 [20:48:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5 [34]#011train-error:0.175733#011validation-error:0.1994 [20:48:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [35]#011train-error:0.1744#011validation-error:0.1985 [20:48:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [36]#011train-error:0.171933#011validation-error:0.1979 [20:48:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 12 pruned nodes, max_depth=5 [37]#011train-error:0.170267#011validation-error:0.1969 [20:48:41] src/tree/updater_prune.cc:74: tree 
pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [38]#011train-error:0.168533#011validation-error:0.1965 [20:48:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [39]#011train-error:0.167533#011validation-error:0.1955 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ....................Arguments: serve [2019-10-21 20:53:23 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2019-10-21 20:53:23 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) Arguments: serve [2019-10-21 20:53:23 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2019-10-21 20:53:23 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2019-10-21 20:53:23 +0000] [1] [INFO] Using worker: gevent [2019-10-21 20:53:23 +0000] [39] [INFO] Booting worker with pid: 39 [2019-10-21 20:53:23 +0000] [40] [INFO] Booting worker with pid: 40 [2019-10-21 20:53:23 +0000] [1] [INFO] Using worker: gevent [2019-10-21 20:53:23 +0000] [39] [INFO] Booting worker with pid: 39 [2019-10-21 20:53:23 +0000] [40] [INFO] Booting worker with pid: 40 [2019-10-21 20:53:23 +0000] [41] [INFO] Booting worker with pid: 41 [2019-10-21 20:53:23 +0000] [42] [INFO] Booting worker with pid: 42 [2019-10-21:20:53:23:INFO] Model loaded successfully for worker : 40 [2019-10-21:20:53:23:INFO] Model loaded successfully for worker : 39 [2019-10-21:20:53:23:INFO] Model loaded successfully for worker : 41 [2019-10-21:20:53:23:INFO] Model loaded successfully for worker : 42 [2019-10-21 20:53:23 +0000] [41] [INFO] Booting worker with pid: 41 [2019-10-21 20:53:23 +0000] [42] [INFO] Booting worker with pid: 42 [2019-10-21:20:53:23:INFO] Model loaded successfully for worker : 40 [2019-10-21:20:53:23:INFO] Model loaded successfully for worker : 39 [2019-10-21:20:53:23:INFO] Model loaded successfully for worker : 41 [2019-10-21:20:53:23:INFO] Model loaded successfully for worker : 42 2019-10-21T20:53:28.062:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2019-10-21:20:53:30:INFO] Sniff delimiter as ',' [2019-10-21:20:53:30:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:30:INFO] Sniff 
delimiter as ',' [2019-10-21:20:53:30:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:30:INFO] Sniff delimiter as ',' [2019-10-21:20:53:30:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:31:INFO] Sniff delimiter as ',' [2019-10-21:20:53:31:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:31:INFO] Sniff delimiter as ',' [2019-10-21:20:53:31:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:30:INFO] Sniff delimiter as ',' [2019-10-21:20:53:30:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:31:INFO] Sniff delimiter as ',' [2019-10-21:20:53:31:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:31:INFO] Sniff delimiter as ',' [2019-10-21:20:53:31:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:33:INFO] Sniff delimiter as ',' [2019-10-21:20:53:33:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:33:INFO] Sniff delimiter as ',' [2019-10-21:20:53:33:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:33:INFO] Sniff delimiter as ',' [2019-10-21:20:53:33:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:33:INFO] Sniff delimiter as ',' [2019-10-21:20:53:33:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:33:INFO] Sniff delimiter as ',' [2019-10-21:20:53:33:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:33:INFO] Sniff delimiter as ',' [2019-10-21:20:53:33:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:33:INFO] Sniff delimiter as ',' [2019-10-21:20:53:33:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:33:INFO] Sniff delimiter as ',' [2019-10-21:20:53:33:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:35:INFO] Sniff delimiter as ',' [2019-10-21:20:53:35:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:35:INFO] Sniff delimiter as ',' [2019-10-21:20:53:35:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:36:INFO] Sniff delimiter as ',' [2019-10-21:20:53:36:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:36:INFO] Sniff delimiter as ',' [2019-10-21:20:53:36:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:36:INFO] Sniff delimiter as ',' [2019-10-21:20:53:36:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:36:INFO] Sniff delimiter as ',' [2019-10-21:20:53:36:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:37:INFO] Sniff delimiter as ',' [2019-10-21:20:53:37:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:38:INFO] Sniff delimiter as ',' [2019-10-21:20:53:38:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:37:INFO] Sniff delimiter as ',' [2019-10-21:20:53:37:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:38:INFO] Sniff delimiter as ',' [2019-10-21:20:53:38:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:38:INFO] Sniff delimiter as ',' [2019-10-21:20:53:38:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:38:INFO] Sniff delimiter as ',' [2019-10-21:20:53:38:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:38:INFO] Sniff delimiter as ',' [2019-10-21:20:53:38:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:38:INFO] Sniff delimiter as ',' [2019-10-21:20:53:38:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:40:INFO] Sniff delimiter as ',' [2019-10-21:20:53:40:INFO] Sniff delimiter as ',' [2019-10-21:20:53:40:INFO] Determined delimiter of CSV input is 
',' [2019-10-21:20:53:40:INFO] Sniff delimiter as ',' [2019-10-21:20:53:40:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:40:INFO] Sniff delimiter as ',' [2019-10-21:20:53:40:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:40:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:40:INFO] Sniff delimiter as ',' [2019-10-21:20:53:40:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:40:INFO] Sniff delimiter as ',' [2019-10-21:20:53:40:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:41:INFO] Sniff delimiter as ',' [2019-10-21:20:53:41:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:41:INFO] Sniff delimiter as ',' [2019-10-21:20:53:41:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:42:INFO] Sniff delimiter as ',' [2019-10-21:20:53:42:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:42:INFO] Sniff delimiter as ',' [2019-10-21:20:53:42:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:42:INFO] Sniff delimiter as ',' [2019-10-21:20:53:42:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:43:INFO] Sniff delimiter as ',' [2019-10-21:20:53:43:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:43:INFO] Sniff delimiter as ',' [2019-10-21:20:53:43:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:42:INFO] Sniff delimiter as ',' [2019-10-21:20:53:42:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:43:INFO] Sniff delimiter as ',' [2019-10-21:20:53:43:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:43:INFO] Sniff delimiter as ',' [2019-10-21:20:53:43:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:46:INFO] Sniff delimiter as ',' [2019-10-21:20:53:46:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:46:INFO] Sniff delimiter as ',' [2019-10-21:20:53:46:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:47:INFO] Sniff delimiter as ',' [2019-10-21:20:53:47:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:47:INFO] Sniff delimiter as ',' [2019-10-21:20:53:47:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:47:INFO] Sniff delimiter as ',' [2019-10-21:20:53:47:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:47:INFO] Sniff delimiter as ',' [2019-10-21:20:53:47:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:47:INFO] Sniff delimiter as ',' [2019-10-21:20:53:47:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:48:INFO] Sniff delimiter as ',' [2019-10-21:20:53:48:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:47:INFO] Sniff delimiter as ',' [2019-10-21:20:53:47:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:48:INFO] Sniff delimiter as ',' [2019-10-21:20:53:48:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:49:INFO] Sniff delimiter as ',' [2019-10-21:20:53:49:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:49:INFO] Sniff delimiter as ',' [2019-10-21:20:53:49:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:49:INFO] Sniff delimiter as ',' [2019-10-21:20:53:49:INFO] Sniff delimiter as ',' [2019-10-21:20:53:49:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:50:INFO] Sniff delimiter as ',' [2019-10-21:20:53:50:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:50:INFO] Sniff delimiter as ',' [2019-10-21:20:53:50:INFO] Determined delimiter of CSV input is ',' 
[2019-10-21:20:53:49:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:50:INFO] Sniff delimiter as ',' [2019-10-21:20:53:50:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:50:INFO] Sniff delimiter as ',' [2019-10-21:20:53:50:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:52:INFO] Sniff delimiter as ',' [2019-10-21:20:53:52:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:52:INFO] Sniff delimiter as ',' [2019-10-21:20:53:52:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:52:INFO] Sniff delimiter as ',' [2019-10-21:20:53:52:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:52:INFO] Sniff delimiter as ',' [2019-10-21:20:53:52:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:52:INFO] Sniff delimiter as ',' [2019-10-21:20:53:52:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:52:INFO] Sniff delimiter as ',' [2019-10-21:20:53:52:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:53:INFO] Sniff delimiter as ',' [2019-10-21:20:53:53:INFO] Determined delimiter of CSV input is ',' [2019-10-21:20:53:53:INFO] Sniff delimiter as ',' [2019-10-21:20:53:53:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/366.1 KiB (3.2 MiB/s) with 1 file(s) remaining Completed 366.1 KiB/366.1 KiB (4.4 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-central-1-668160054588/xgboost-2019-10-21-20-50-09-021/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. 
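# Note: the original test reviews must be encoded with the *new* vocabulary here; otherwise the
# feature columns would not line up with the ones the re-trained model was fitted on.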
test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name mod_name = new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "imdb-xgb-enpoint-config" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": mod_name, "VariantName": "AllTraffic" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. 
This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code endpoint_name = 'xgboost-2019-10-21-19-39-11-550' # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName=endpoint_name, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output -----------------------------------------------------------------------------------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. 
You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output _____no_output_____ ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
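Before combining anything, it can help to glance at the directory layout that the loading code below assumes (an optional peek, assuming the archive extracted to `../data/aclImdb` as in the previous cell): each of `train` and `test` contains a `pos` and a `neg` folder full of one-review `.txt` files. ###Code
# Optional: peek at the on-disk layout assumed by the loading code below
# (assumes the tarball extracted to ../data/aclImdb as in the previous cell).
!ls ../data/aclImdb
!ls ../data/aclImdb/train/pos | head -5
###Output
_____no_output_____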
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
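As a rough preview of what that processing does, here is a toy walk-through of the same steps on a made-up one-line review (the sample string and the stemmed output shown in the comment are illustrative assumptions; the real implementation follows in the next cell). ###Code
# Toy walk-through (illustration only) of the cleaning steps implemented below:
# strip HTML, lowercase, drop stopwords, stem.
import re
import nltk
nltk.download("stopwords", quiet=True)
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from bs4 import BeautifulSoup

sample = "<br />This movie was NOT good... the acting was terrible!"
text = BeautifulSoup(sample, "html.parser").get_text()    # remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())          # lowercase, keep alphanumerics
words = [w for w in text.split() if w not in stopwords.words("english")]
print([PorterStemmer().stem(w) for w in words])            # roughly: ['movi', 'good', 'act', 'terribl']
###Output
_____no_output_____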
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Wrote preprocessed data to cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
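To make that point concrete, here is a minimal sketch on made-up toy documents: the vocabulary is learned from the training documents alone, and any word that only shows up at test time is simply ignored when transforming. ###Code
# Minimal sketch on toy data: fit the vectorizer on training documents only;
# words unseen at fit time contribute nothing to the test-time encoding.
from sklearn.feature_extraction.text import CountVectorizer

toy_train = [["great", "movie", "great"], ["bad", "movie"]]
toy_test  = [["great", "awful", "movie"]]   # 'awful' never appeared in training

toy_cv = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
print(toy_cv.fit_transform(toy_train).toarray())  # counts over the training vocabulary
print(toy_cv.vocabulary_)                         # e.g. {'bad': 0, 'great': 1, 'movie': 2}
print(toy_cv.transform(toy_test).toarray())       # 'awful' is silently dropped
###Output
_____no_output_____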
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer # from sklearn.externals import joblib import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model of comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code are then used the manipulate the training artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is require when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2020-09-15 22:24:57 Starting - Starting the training job... 2020-09-15 22:24:58 Starting - Launching requested ML instances...... 2020-09-15 22:26:06 Starting - Preparing the instances for training...... 2020-09-15 22:26:59 Downloading - Downloading input data... 2020-09-15 22:27:52 Training - Training image download completed. Training in progress..Arguments: train [2020-09-15:22:27:52:INFO] Running standalone xgboost training. [2020-09-15:22:27:52:INFO] File size need to be processed in the node: 238.47mb. 
Available memory size in the node: 8484.52mb [2020-09-15:22:27:52:INFO] Determined delimiter of CSV input is ',' [22:27:52] S3DistributionType set as FullyReplicated [22:27:54] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-09-15:22:27:54:INFO] Determined delimiter of CSV input is ',' [22:27:54] S3DistributionType set as FullyReplicated [22:27:55] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [22:27:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [0]#011train-error:0.300133#011validation-error:0.2935 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [22:28:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 10 pruned nodes, max_depth=5 [1]#011train-error:0.284067#011validation-error:0.2798 [22:28:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [2]#011train-error:0.282267#011validation-error:0.2794 [22:28:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [3]#011train-error:0.279267#011validation-error:0.2794 [22:28:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 12 pruned nodes, max_depth=5 [4]#011train-error:0.2662#011validation-error:0.2658 [22:28:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [5]#011train-error:0.259#011validation-error:0.2588 [22:28:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 2 pruned nodes, max_depth=5 [6]#011train-error:0.2522#011validation-error:0.2501 [22:28:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [7]#011train-error:0.236467#011validation-error:0.2381 [22:28:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [8]#011train-error:0.229933#011validation-error:0.2317 [22:28:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5 [9]#011train-error:0.2238#011validation-error:0.2265 [22:28:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [10]#011train-error:0.220333#011validation-error:0.2244 [22:28:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [11]#011train-error:0.213667#011validation-error:0.2216 [22:28:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [12]#011train-error:0.210467#011validation-error:0.2202 [22:28:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [13]#011train-error:0.204933#011validation-error:0.2162 [22:28:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [14]#011train-error:0.200933#011validation-error:0.212 [22:28:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [15]#011train-error:0.1972#011validation-error:0.2099 [22:28:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [16]#011train-error:0.194933#011validation-error:0.207 [22:28:21] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [17]#011train-error:0.192733#011validation-error:0.2048 [22:28:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [18]#011train-error:0.191933#011validation-error:0.2035 [22:28:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 12 pruned nodes, max_depth=5 [19]#011train-error:0.188#011validation-error:0.1996 [22:28:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [20]#011train-error:0.1844#011validation-error:0.1999 [22:28:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [21]#011train-error:0.182533#011validation-error:0.1974 [22:28:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [22]#011train-error:0.180667#011validation-error:0.1969 [22:28:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [23]#011train-error:0.179867#011validation-error:0.1926 [22:28:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [24]#011train-error:0.178933#011validation-error:0.1918 [22:28:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [25]#011train-error:0.177133#011validation-error:0.1894 [22:28:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [26]#011train-error:0.173733#011validation-error:0.1884 [22:28:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [27]#011train-error:0.1718#011validation-error:0.186 [22:28:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5 [28]#011train-error:0.168933#011validation-error:0.1871 [22:28:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [29]#011train-error:0.1686#011validation-error:0.1872 [22:28:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [30]#011train-error:0.1654#011validation-error:0.1838 [22:28:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 0 pruned nodes, max_depth=5 [31]#011train-error:0.163867#011validation-error:0.1822 [22:28:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [32]#011train-error:0.162533#011validation-error:0.1825 [22:28:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 2 pruned nodes, max_depth=5 [33]#011train-error:0.161533#011validation-error:0.1816 [22:28:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [34]#011train-error:0.159533#011validation-error:0.1806 [22:28:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 16 pruned nodes, max_depth=5 [35]#011train-error:0.1586#011validation-error:0.1795 [22:28:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [36]#011train-error:0.156933#011validation-error:0.1765 [22:28:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [37]#011train-error:0.157733#011validation-error:0.1748 [22:28:48] src/tree/updater_prune.cc:74: 
tree pruning end, 1 roots, 34 extra nodes, 14 pruned nodes, max_depth=5 [38]#011train-error:0.155533#011validation-error:0.1755 [22:28:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [39]#011train-error:0.1538#011validation-error:0.1754 [22:28:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [40]#011train-error:0.152667#011validation-error:0.1734 [22:28:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [41]#011train-error:0.150467#011validation-error:0.1727 [22:28:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [42]#011train-error:0.1504#011validation-error:0.1713 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMakers Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can peform inference on a large number of samples. An example of this in industry might be peforming an end of month report. This method of inference can also be useful to us as it means to can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer objects from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. 
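Before waiting, it can also be handy to note where the results will end up; the transformer's `output_path` attribute points at the S3 prefix that the copy command further below reads from. ###Code
# Optional: the S3 prefix where the batch transform writes test.csv.out
# (the same path the `aws s3 cp` step below copies from).
print(xgb_transformer.output_path)
###Output
_____no_output_____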
###Code xgb_transformer.wait() ###Output ............................2020-09-15T22:43:12.508:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2020-09-15 22:43:12 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-09-15 22:43:12 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-09-15 22:43:12 +0000] [1] [INFO] Using worker: gevent [2020-09-15 22:43:12 +0000] [37] [INFO] Booting worker with pid: 37 [2020-09-15 22:43:12 +0000] [38] [INFO] Booting worker with pid: 38 [2020-09-15:22:43:12:INFO] Model loaded successfully for worker : 37 [2020-09-15 22:43:12 +0000] [39] [INFO] Booting worker with pid: 39 [2020-09-15 22:43:12 +0000] [40] [INFO] Booting worker with pid: 40 [2020-09-15:22:43:12:INFO] Model loaded successfully for worker : 38 [2020-09-15:22:43:12:INFO] Model loaded successfully for worker : 39 [2020-09-15:22:43:12:INFO] Model loaded successfully for worker : 40 [2020-09-15:22:43:12:INFO] Sniff delimiter as ',' [2020-09-15:22:43:12:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:43:12:INFO] Sniff delimiter as ',' [2020-09-15:22:43:12:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:43:12:INFO] Sniff delimiter as ',' [2020-09-15:22:43:12:INFO] Determined delimiter of CSV input is ',' Arguments: serve [2020-09-15 22:43:12 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-09-15 22:43:12 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-09-15 22:43:12 +0000] [1] [INFO] Using worker: gevent [2020-09-15 22:43:12 +0000] [37] [INFO] Booting worker with pid: 37 [2020-09-15 22:43:12 +0000] [38] [INFO] Booting worker with pid: 38 [2020-09-15:22:43:12:INFO] Model loaded successfully for worker : 37 [2020-09-15 22:43:12 +0000] [39] [INFO] Booting worker with pid: 39 [2020-09-15 22:43:12 +0000] [40] [INFO] Booting worker with pid: 40 [2020-09-15:22:43:12:INFO] Model loaded successfully for worker : 38 [2020-09-15:22:43:12:INFO] Model loaded successfully for worker : 39 [2020-09-15:22:43:12:INFO] Model loaded successfully for worker : 40 [2020-09-15:22:43:12:INFO] Sniff delimiter as ',' [2020-09-15:22:43:12:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:43:12:INFO] Sniff delimiter as ',' [2020-09-15:22:43:12:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:43:12:INFO] Sniff delimiter as ',' [2020-09-15:22:43:12:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:43:14:INFO] Sniff delimiter as ',' [2020-09-15:22:43:14:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:43:15:INFO] Sniff delimiter as ',' [2020-09-15:22:43:15:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:43:15:INFO] Sniff delimiter as ',' [2020-09-15:22:43:15:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:43:15:INFO] Sniff delimiter as ',' [2020-09-15:22:43:15:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:43:14:INFO] Sniff delimiter as ',' [2020-09-15:22:43:14:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:43:15:INFO] Sniff delimiter as ',' [2020-09-15:22:43:15:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:43:15:INFO] Sniff delimiter as ',' [2020-09-15:22:43:15:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:43:15:INFO] Sniff delimiter as ',' [2020-09-15:22:43:15:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:43:16:INFO] Sniff delimiter as ',' [2020-09-15:22:43:16:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:43:16:INFO] Sniff delimiter as ',' 
###Markdown Now the transform job has executed and the
result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/368.9 KiB (3.2 MiB/s) with 1 file(s) remaining Completed 368.9 KiB/368.9 KiB (4.6 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-444100773610/xgboost-2020-09-15-22-38-47-933/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary = vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. 
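In addition to checking a single row below, an optional stronger check (using the `new_XV` array and the `vocabulary` dictionary built above) is to compare the whole array's shape against the vocabulary size. ###Code
# Optional: every encoded review should have exactly one column per vocabulary word.
print(new_XV.shape)                      # expected: (number of new reviews, 5000)
assert new_XV.shape[1] == len(vocabulary)
###Output
_____no_output_____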
###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output .............................Arguments: serve Arguments: serve [2020-09-15 22:51:12 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-09-15 22:51:12 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-09-15 22:51:12 +0000] [1] [INFO] Using worker: gevent [2020-09-15 22:51:12 +0000] [37] [INFO] Booting worker with pid: 37 [2020-09-15 22:51:12 +0000] [38] [INFO] Booting worker with pid: 38 [2020-09-15 22:51:12 +0000] [39] [INFO] Booting worker with pid: 39 [2020-09-15:22:51:12:INFO] Model loaded successfully for worker : 37 [2020-09-15:22:51:13:INFO] Model loaded successfully for worker : 38 [2020-09-15 22:51:13 +0000] [40] [INFO] Booting worker with pid: 40 [2020-09-15:22:51:13:INFO] Model loaded successfully for worker : 39 [2020-09-15 22:51:12 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-09-15 22:51:12 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-09-15 22:51:12 +0000] [1] [INFO] Using worker: gevent [2020-09-15 22:51:12 +0000] [37] [INFO] Booting worker with pid: 37 [2020-09-15 22:51:12 +0000] [38] [INFO] Booting worker with pid: 38 [2020-09-15 22:51:12 +0000] [39] [INFO] Booting worker with pid: 39 [2020-09-15:22:51:12:INFO] Model loaded successfully for worker : 37 [2020-09-15:22:51:13:INFO] Model loaded successfully for worker : 38 [2020-09-15 22:51:13 +0000] [40] [INFO] Booting worker with pid: 40 [2020-09-15:22:51:13:INFO] Model loaded successfully for worker : 39 [2020-09-15:22:51:13:INFO] Model loaded successfully for worker : 40 [2020-09-15:22:51:13:INFO] Sniff delimiter as ',' [2020-09-15:22:51:13:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:13:INFO] Sniff delimiter as ',' [2020-09-15:22:51:13:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:13:INFO] Sniff delimiter as ',' [2020-09-15:22:51:13:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:13:INFO] Model loaded successfully for worker 
: 40 [2020-09-15:22:51:13:INFO] Sniff delimiter as ',' [2020-09-15:22:51:13:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:13:INFO] Sniff delimiter as ',' [2020-09-15:22:51:13:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:13:INFO] Sniff delimiter as ',' [2020-09-15:22:51:13:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:15:INFO] Sniff delimiter as ',' [2020-09-15:22:51:15:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:15:INFO] Sniff delimiter as ',' [2020-09-15:22:51:15:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:15:INFO] Sniff delimiter as ',' [2020-09-15:22:51:15:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:15:INFO] Sniff delimiter as ',' [2020-09-15:22:51:15:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:15:INFO] Sniff delimiter as ',' [2020-09-15:22:51:15:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:15:INFO] Sniff delimiter as ',' [2020-09-15:22:51:15:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:17:INFO] Sniff delimiter as ',' [2020-09-15:22:51:17:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:17:INFO] Sniff delimiter as ',' [2020-09-15:22:51:17:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:17:INFO] Sniff delimiter as ',' [2020-09-15:22:51:17:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:17:INFO] Sniff delimiter as ',' [2020-09-15:22:51:17:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:17:INFO] Sniff delimiter as ',' [2020-09-15:22:51:17:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:17:INFO] Sniff delimiter as ',' [2020-09-15:22:51:17:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:17:INFO] Sniff delimiter as ',' [2020-09-15:22:51:17:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:17:INFO] Sniff delimiter as ',' [2020-09-15:22:51:17:INFO] Determined delimiter of CSV input is ',' 2020-09-15T22:51:13.003:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-09-15:22:51:19:INFO] Sniff delimiter as ',' [2020-09-15:22:51:19:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:19:INFO] Sniff delimiter as ',' [2020-09-15:22:51:19:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:19:INFO] Sniff delimiter as ',' [2020-09-15:22:51:19:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:19:INFO] Sniff delimiter as ',' [2020-09-15:22:51:19:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:20:INFO] Sniff delimiter as ',' [2020-09-15:22:51:20:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:20:INFO] Sniff delimiter as ',' [2020-09-15:22:51:20:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:20:INFO] Sniff delimiter as ',' [2020-09-15:22:51:20:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:20:INFO] Sniff delimiter as ',' [2020-09-15:22:51:20:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:21:INFO] Sniff delimiter as ',' [2020-09-15:22:51:21:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:22:INFO] Sniff delimiter as ',' [2020-09-15:22:51:22:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:22:INFO] Sniff delimiter as ',' [2020-09-15:22:51:22:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:22:INFO] Sniff delimiter as ',' [2020-09-15:22:51:22:INFO] Determined delimiter of CSV input is ',' 
[2020-09-15:22:51:32:INFO] Sniff delimiter as ',' [2020-09-15:22:51:32:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:32:INFO] Sniff delimiter as ',' [2020-09-15:22:51:32:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:32:INFO] Sniff delimiter as ',' [2020-09-15:22:51:32:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:32:INFO] Sniff delimiter as ',' [2020-09-15:22:51:32:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:34:INFO] Sniff delimiter as ',' [2020-09-15:22:51:34:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:34:INFO] Sniff delimiter as ',' [2020-09-15:22:51:34:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:34:INFO] Sniff delimiter as ',' [2020-09-15:22:51:34:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:34:INFO] Sniff delimiter as ',' [2020-09-15:22:51:34:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:34:INFO] Sniff delimiter as ',' [2020-09-15:22:51:34:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:34:INFO] Sniff delimiter as ',' [2020-09-15:22:51:34:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:34:INFO] Sniff delimiter as ',' [2020-09-15:22:51:34:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:34:INFO] Sniff delimiter as ',' [2020-09-15:22:51:34:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:36:INFO] Sniff delimiter as ',' [2020-09-15:22:51:36:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:36:INFO] Sniff delimiter as ',' [2020-09-15:22:51:36:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:36:INFO] Sniff delimiter as ',' [2020-09-15:22:51:36:INFO] Determined delimiter of CSV input is ',' [2020-09-15:22:51:36:INFO] Sniff delimiter as ',' [2020-09-15:22:51:36:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/369.1 KiB (2.0 MiB/s) with 1 file(s) remaining Completed 369.1 KiB/369.1 KiB (2.9 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-444100773610/xgboost-2020-09-15-22-46-28-174/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. 
Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. Using already existing model: xgboost-2020-09-15-22-24-56-899 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['yowsa', 'realli', 'want', 'action', 'check', 'babe', 'bomb', 'non', 'stop', 'thriller', 'veteran', 'star', 'martin', 'sheen', 'lead', 'trio', 'supermodel', 'mission', 'stop', 'nuclear', 'terror', 'director', 'dean', 'hamilton', 'let', 'heavi', 'plotlin', 'get', 'way', 'massiv', 'dose', 'teensi', 'swimsuit', 'scene', 'jiggli', 'beach', 'jog', 'hubba', 'hubba', 'hot', 'tub', 'like', 'want', 'action', 'get', 'pearl', 'harbor', 'want', 'babe', 'get', 'eye', 'everi', 'two', 'minut', 'want', 'go', 'buy', 'video', 'yowsa', 'yowsa', 'yowsa', 'mighti', 'spici', 'meatbal'], 1) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. 
###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:484: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None' warnings.warn("The parameter 'token_pattern' will not be used" ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'profil', 'rapidli', 'ghetto', 'spill', 'playboy', 'growth', 'turtl', 'bach', 'assort'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'dubiou', 'evan', 'substanti', 'bravo', 'ingrid', 'drone', 'banana', 'scariest', 'incorrect'} ###Markdown These words themselves don't tell us much, however if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question:** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, # The image name of the training container role, # The IAM role to use (our current role in this case) train_instance_count=1, # The number of instances to use for training train_instance_type='ml.m4.xlarge', # The type of instance to use for training output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), # Where to save the output (the model artifacts) sagemaker_session=session) # The current SageMaker session # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, objective='binary:logistic', early_stopping_rounds=10, num_round=200) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2020-09-15 23:07:49 Starting - Starting the training job... 2020-09-15 23:07:50 Starting - Launching requested ML instances...... 2020-09-15 23:09:14 Starting - Preparing the instances for training......... 2020-09-15 23:10:31 Downloading - Downloading input data... 2020-09-15 23:11:05 Training - Downloading the training image..Arguments: train [2020-09-15:23:11:25:INFO] Running standalone xgboost training. [2020-09-15:23:11:25:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8485.76mb [2020-09-15:23:11:25:INFO] Determined delimiter of CSV input is ',' [23:11:25] S3DistributionType set as FullyReplicated [23:11:27] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-09-15:23:11:27:INFO] Determined delimiter of CSV input is ',' [23:11:27] S3DistributionType set as FullyReplicated [23:11:29] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [23:11:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 10 pruned nodes, max_depth=5 [0]#011train-error:0.314#011validation-error:0.3096 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [23:11:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 6 pruned nodes, max_depth=5 [1]#011train-error:0.3008#011validation-error:0.2957 [23:11:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [2]#011train-error:0.287667#011validation-error:0.2815 [23:11:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [3]#011train-error:0.285067#011validation-error:0.2818 2020-09-15 23:11:25 Training - Training image download completed. 
Training in progress.[23:11:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [4]#011train-error:0.276667#011validation-error:0.2732 [23:11:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [5]#011train-error:0.258533#011validation-error:0.26 [23:11:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 8 pruned nodes, max_depth=5 [6]#011train-error:0.253867#011validation-error:0.2558 [23:11:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [7]#011train-error:0.248867#011validation-error:0.2522 [23:11:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [8]#011train-error:0.243867#011validation-error:0.248 [23:11:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 2 pruned nodes, max_depth=5 [9]#011train-error:0.238333#011validation-error:0.2407 [23:11:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [10]#011train-error:0.229933#011validation-error:0.2373 [23:11:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [11]#011train-error:0.226867#011validation-error:0.2317 [23:11:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [12]#011train-error:0.224533#011validation-error:0.2295 [23:11:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [13]#011train-error:0.219667#011validation-error:0.2271 [23:11:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [14]#011train-error:0.2152#011validation-error:0.2206 [23:11:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [15]#011train-error:0.213667#011validation-error:0.2207 [23:11:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [16]#011train-error:0.2106#011validation-error:0.22 [23:11:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [17]#011train-error:0.205867#011validation-error:0.2175 [23:11:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 10 pruned nodes, max_depth=5 [18]#011train-error:0.204133#011validation-error:0.2145 [23:11:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [19]#011train-error:0.201667#011validation-error:0.2114 [23:11:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 2 pruned nodes, max_depth=5 [20]#011train-error:0.200467#011validation-error:0.2089 [23:11:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [21]#011train-error:0.197#011validation-error:0.209 [23:12:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [22]#011train-error:0.195467#011validation-error:0.2075 [23:12:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [23]#011train-error:0.1922#011validation-error:0.2066 [23:12:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [24]#011train-error:0.1904#011validation-error:0.2045 [23:12:04] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [25]#011train-error:0.188333#011validation-error:0.2016 [23:12:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [26]#011train-error:0.186467#011validation-error:0.2008 [23:12:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [27]#011train-error:0.184667#011validation-error:0.1997 [23:12:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [28]#011train-error:0.1812#011validation-error:0.1968 [23:12:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [29]#011train-error:0.181267#011validation-error:0.1964 [23:12:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [30]#011train-error:0.180333#011validation-error:0.1975 [23:12:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [31]#011train-error:0.178333#011validation-error:0.1977 [23:12:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [32]#011train-error:0.177667#011validation-error:0.1954 [23:12:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [33]#011train-error:0.175667#011validation-error:0.1948 [23:12:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5 [34]#011train-error:0.174333#011validation-error:0.1935 [23:12:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [35]#011train-error:0.174467#011validation-error:0.192 [23:12:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [36]#011train-error:0.172933#011validation-error:0.1921 [23:12:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [37]#011train-error:0.1716#011validation-error:0.1901 [23:12:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [38]#011train-error:0.169667#011validation-error:0.1902 [23:12:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 16 pruned nodes, max_depth=5 [39]#011train-error:0.1692#011validation-error:0.1888 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. 
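###Markdown One hedged way to address the leakage question above, before we move on to the transformer cell below, would be to carve a hold-out slice off the new data before any retraining happens and to report accuracy only on that slice. The cell that follows is a minimal sketch of that idea, assuming it is run back at the point where `new_XV` and `new_Y` are still in memory (before the train/validation split and before those variables are freed). ###Code
# Illustrative sketch: reserve a hold-out test set from the new data *before* retraining,
# so the final evaluation is not performed on data the new model was trained on.
# Assumes new_XV (bag-of-words array) and new_Y (labels) are still in memory.
from sklearn.model_selection import train_test_split

new_rest_X, new_holdout_X, new_rest_Y, new_holdout_Y = train_test_split(
    new_XV, new_Y, test_size=0.2, random_state=0)

# new_rest_X / new_rest_Y would then be split into training and validation sets as before,
# while new_holdout_X / new_holdout_Y are kept aside purely for the final accuracy check.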
###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output .............................Arguments: serve [2020-09-15 23:20:53 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-09-15 23:20:53 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-09-15 23:20:53 +0000] [1] [INFO] Using worker: gevent [2020-09-15 23:20:53 +0000] [36] [INFO] Booting worker with pid: 36 [2020-09-15 23:20:53 +0000] [37] [INFO] Booting worker with pid: 37 [2020-09-15:23:20:53:INFO] Model loaded successfully for worker : 36 [2020-09-15 23:20:53 +0000] [38] [INFO] Booting worker with pid: 38 [2020-09-15:23:20:53:INFO] Model loaded successfully for worker : 37 [2020-09-15 23:20:53 +0000] [39] [INFO] Booting worker with pid: 39 [2020-09-15:23:20:53:INFO] Model loaded successfully for worker : 38 [2020-09-15:23:20:54:INFO] Model loaded successfully for worker : 39 [2020-09-15:23:20:54:INFO] Sniff delimiter as ',' [2020-09-15:23:20:54:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:20:54:INFO] Sniff delimiter as ',' [2020-09-15:23:20:54:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:20:54:INFO] Sniff delimiter as ',' [2020-09-15:23:20:54:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:20:54:INFO] Sniff delimiter as ',' [2020-09-15:23:20:54:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:20:56:INFO] Sniff delimiter as ',' [2020-09-15:23:20:56:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:20:56:INFO] Sniff delimiter as ',' [2020-09-15:23:20:56:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:20:56:INFO] Sniff delimiter as ',' [2020-09-15:23:20:56:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:20:57:INFO] Sniff delimiter as ',' [2020-09-15:23:20:57:INFO] Determined delimiter of CSV input is ',' 2020-09-15T23:20:53.806:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-09-15:23:20:59:INFO] Sniff delimiter as ',' [2020-09-15:23:20:59:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:20:59:INFO] Sniff delimiter as ',' [2020-09-15:23:20:59:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:20:59:INFO] Sniff delimiter as ',' [2020-09-15:23:20:59:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:20:59:INFO] Sniff delimiter as ',' [2020-09-15:23:20:59:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:20:59:INFO] Sniff delimiter as ',' [2020-09-15:23:20:59:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:20:59:INFO] Sniff delimiter as ',' [2020-09-15:23:20:59:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:20:59:INFO] Sniff delimiter as ',' [2020-09-15:23:20:59:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:20:59:INFO] Sniff delimiter as ',' [2020-09-15:23:20:59:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:01:INFO] Sniff delimiter as ',' [2020-09-15:23:21:01:INFO] 
Determined delimiter of CSV input is ',' [2020-09-15:23:21:01:INFO] Sniff delimiter as ',' [2020-09-15:23:21:01:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:01:INFO] Sniff delimiter as ',' [2020-09-15:23:21:01:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:01:INFO] Sniff delimiter as ',' [2020-09-15:23:21:01:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:02:INFO] Sniff delimiter as ',' [2020-09-15:23:21:02:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:01:INFO] Sniff delimiter as ',' [2020-09-15:23:21:01:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:01:INFO] Sniff delimiter as ',' [2020-09-15:23:21:01:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:02:INFO] Sniff delimiter as ',' [2020-09-15:23:21:02:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:04:INFO] Sniff delimiter as ',' [2020-09-15:23:21:04:INFO] Sniff delimiter as ',' [2020-09-15:23:21:04:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:04:INFO] Sniff delimiter as ',' [2020-09-15:23:21:04:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:04:INFO] Sniff delimiter as ',' [2020-09-15:23:21:04:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:04:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:04:INFO] Sniff delimiter as ',' [2020-09-15:23:21:04:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:04:INFO] Sniff delimiter as ',' [2020-09-15:23:21:04:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:04:INFO] Sniff delimiter as ',' [2020-09-15:23:21:04:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:04:INFO] Sniff delimiter as ',' [2020-09-15:23:21:04:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:06:INFO] Sniff delimiter as ',' [2020-09-15:23:21:06:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:06:INFO] Sniff delimiter as ',' [2020-09-15:23:21:06:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:06:INFO] Sniff delimiter as ',' [2020-09-15:23:21:06:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:06:INFO] Sniff delimiter as ',' [2020-09-15:23:21:06:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:06:INFO] Sniff delimiter as ',' [2020-09-15:23:21:06:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:06:INFO] Sniff delimiter as ',' [2020-09-15:23:21:06:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:06:INFO] Sniff delimiter as ',' [2020-09-15:23:21:06:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:06:INFO] Sniff delimiter as ',' [2020-09-15:23:21:06:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:11:INFO] Sniff delimiter as ',' [2020-09-15:23:21:11:INFO] Sniff delimiter as ',' [2020-09-15:23:21:11:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:11:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:11:INFO] Sniff delimiter as ',' [2020-09-15:23:21:11:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:11:INFO] Sniff delimiter as ',' [2020-09-15:23:21:11:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:11:INFO] Sniff delimiter as ',' [2020-09-15:23:21:11:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:11:INFO] Sniff delimiter as ',' [2020-09-15:23:21:11:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:11:INFO] Sniff delimiter as ',' [2020-09-15:23:21:11:INFO] Determined 
delimiter of CSV input is ',' [2020-09-15:23:21:11:INFO] Sniff delimiter as ',' [2020-09-15:23:21:11:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:13:INFO] Sniff delimiter as ',' [2020-09-15:23:21:13:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:13:INFO] Sniff delimiter as ',' [2020-09-15:23:21:13:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:14:INFO] Sniff delimiter as ',' [2020-09-15:23:21:14:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:14:INFO] Sniff delimiter as ',' [2020-09-15:23:21:14:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:13:INFO] Sniff delimiter as ',' [2020-09-15:23:21:13:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:13:INFO] Sniff delimiter as ',' [2020-09-15:23:21:13:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:14:INFO] Sniff delimiter as ',' [2020-09-15:23:21:14:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:14:INFO] Sniff delimiter as ',' [2020-09-15:23:21:14:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:16:INFO] Sniff delimiter as ',' [2020-09-15:23:21:16:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:16:INFO] Sniff delimiter as ',' [2020-09-15:23:21:16:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:16:INFO] Sniff delimiter as ',' [2020-09-15:23:21:16:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:16:INFO] Sniff delimiter as ',' [2020-09-15:23:21:16:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:16:INFO] Sniff delimiter as ',' [2020-09-15:23:21:16:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:16:INFO] Sniff delimiter as ',' [2020-09-15:23:21:16:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:16:INFO] Sniff delimiter as ',' [2020-09-15:23:21:16:INFO] Determined delimiter of CSV input is ',' [2020-09-15:23:21:16:INFO] Sniff delimiter as ',' [2020-09-15:23:21:16:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/366.5 KiB (3.7 MiB/s) with 1 file(s) remaining Completed 366.5 KiB/366.5 KiB (5.1 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-444100773610/xgboost-2020-09-15-23-16-11-742/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. 
###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. 
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output ------------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. 
In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output --2020-05-18 20:05:34-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80...connected. HTTP request sent, awaiting response...200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 2.15MB/s in 49s 2020-05-18 20:06:23 (1.63 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labels return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Wrote preprocessed data to cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker.
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output _____no_output_____ ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job.
When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output _____no_output_____ ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output /bin/sh: 1: aws: not found ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
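###Markdown A minimal sketch of one possible way to complete the TODO cell below, assuming the saved `vocabulary` dictionary from the bag-of-words step is the vocabulary the TODO refers to: because the vocabulary is fixed up front, the `CountVectorizer` can be built directly from it and used to transform `new_X` without a separate fit step, and the dummy `preprocessor` and `tokenizer` are reused since the new reviews are already tokenized. ###Code
# Sketch of one way to fill in the TODO cell that follows (assumes the `vocabulary` dict
# and the tokenized `new_X` reviews are in memory). Building the vectorizer from the saved
# vocabulary means no fit step is required before transforming.
vectorizer = CountVectorizer(vocabulary=vocabulary,
                             preprocessor=lambda x: x,  # reviews are already preprocessed
                             tokenizer=lambda x: x)     # and already split into words

new_XV = vectorizer.transform(new_X).toarray()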
###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = None # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = None ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = None ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. ###Output _____no_output_____ ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output /bin/sh: 1: aws: not found ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. 
This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = None ###Output _____no_output_____ ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call `next` on our generator. ###Code print(next(gn)) ###Output _____no_output_____ ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'weari', 'reincarn', '21st', 'playboy', 'victorian', 'ghetto', 'spill'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'omin', 'dubiou', 'masterson', 'optimist', 'orchestr', 'banana', 'sophi'} ###Markdown These words themselves don't tell us much, however if one of these words occurred with a large frequency, that might tell us something. 
In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question:** What exactly is going on here? Not only which (if any) words appear with a larger than expected frequency, but also what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. 
###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = None new_val_location = None new_train_location = None ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = None # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = None s3_new_input_validation = None # TODO: Using the new validation and training data, 'fit' your new model. ###Output _____no_output_____ ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = None ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. ###Output _____no_output_____ ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown And see how well the model did. 
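For reference, the TODO cells above admit a completion along the following lines. This is only a sketch that mirrors the Step 4 cells, reusing the same `container`, `role`, S3 `prefix` and hyperparameters for the re-trained model. ###Code
# Sketch only: upload the new csv files to S3 using the same prefix as before.
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)

# Create a new estimator, reusing the container, role and hyperparameters from Step 4.
new_xgb = sagemaker.estimator.Estimator(container, role,
                                        train_instance_count=1,
                                        train_instance_type='ml.m4.xlarge',
                                        output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                        sagemaker_session=session)
new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8,
                            silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500)

# Fit the new model on the newly collected training and validation data.
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})

# Batch transform the new data with the re-trained model and wait for the job to finish.
new_xgb_transformer = new_xgb.transformer(instance_count=1, instance_type='ml.m4.xlarge')
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output _____no_output_____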
###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output _____no_output_____ ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = None ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. 
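Before looking up the model name in the next cell, here is a sketch of how the low-level endpoint update asked for in the TODO cells that follow might look. It is an assumption based on the boto3 SageMaker client exposed as `session.sagemaker_client`, and the configuration name and variant name below are purely illustrative. ###Code
from time import gmtime, strftime

# Illustrative, unique name for the new endpoint configuration (any unique string should do).
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())

# Ask SageMaker to construct an endpoint configuration pointing at the newly trained model.
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
                                    EndpointConfigName=new_xgb_endpoint_config_name,
                                    ProductionVariants=[{
                                        "InstanceType": "ml.m4.xlarge",
                                        "InitialVariantWeight": 1,
                                        "InitialInstanceCount": 1,
                                        "ModelName": new_xgb_transformer.model_name,
                                        "VariantName": "XGB-Model"
                                    }])

# Point the existing, deployed endpoint at the new configuration; SageMaker swaps models with no downtime.
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint,
                                         EndpointConfigName=new_xgb_endpoint_config_name)
###Output _____no_output_____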
###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = None # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = None ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. 
###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. 
###Code # Make sure that we use SageMaker 1.x !pip install sagemaker==1.72.0 ###Output Collecting sagemaker==1.72.0 Downloading sagemaker-1.72.0.tar.gz (297 kB)  |████████████████████████████████| 297 kB 18.0 MB/s eta 0:00:01 [?25hRequirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.17.76) Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.5) Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.15.2) Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.5.3) Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5) Collecting smdebug-rulesconfig==0.1.4 Downloading smdebug_rulesconfig-0.1.4-py2.py3-none-any.whl (10 kB) Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.7.0) Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.9) Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0) Requirement already satisfied: s3transfer<0.5.0,>=0.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.4.2) Requirement already satisfied: botocore<1.21.0,>=1.20.76 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.20.76) Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.76->boto3>=1.14.12->sagemaker==1.72.0) (1.26.4) Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.76->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1) Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.4.0) Requirement already satisfied: typing-extensions>=3.6.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.7.4.3) Requirement already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.7) Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0) Building wheels for collected packages: sagemaker Building wheel for sagemaker (setup.py) ... 
[?25ldone [?25h Created wheel for sagemaker: filename=sagemaker-1.72.0-py2.py3-none-any.whl size=386358 sha256=2eb37ecc9d8e560b40f0fdf549dd18daaf221207bd70ba73eb4289f67be9a1ee Stored in directory: /home/ec2-user/.cache/pip/wheels/c3/58/70/85faf4437568bfaa4c419937569ba1fe54d44c5db42406bbd7 Successfully built sagemaker Installing collected packages: smdebug-rulesconfig, sagemaker Attempting uninstall: smdebug-rulesconfig Found existing installation: smdebug-rulesconfig 1.0.1 Uninstalling smdebug-rulesconfig-1.0.1: Successfully uninstalled smdebug-rulesconfig-1.0.1 Attempting uninstall: sagemaker Found existing installation: sagemaker 2.41.0 Uninstalling sagemaker-2.41.0: Successfully uninstalled sagemaker-2.41.0 Successfully installed sagemaker-1.72.0 smdebug-rulesconfig-0.1.4 ###Markdown Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2021-06-23 13:35:43-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 22.5MB/s in 4.6s 2021-06-23 13:35:47 (17.4 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
###Code # train_X = test_X = None import numpy as np from sklearn.feature_extraction.text import CountVectorizer # from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays import joblib def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: print("Failed to load cache file!") pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary data = None # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) # with open(os.path.join(cache_dir, "bow_features.pkl"), "rb") as f: # cache_data = joblib.load(f) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model of comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code are then used the manipulate the training artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is require when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2021-06-24 08:47:13 Starting - Starting the training job... 2021-06-24 08:47:15 Starting - Launching requested ML instances...... 2021-06-24 08:48:27 Starting - Preparing the instances for training......... 2021-06-24 08:50:06 Downloading - Downloading input data 2021-06-24 08:50:06 Training - Downloading the training image... 2021-06-24 08:50:33 Training - Training image download completed. Training in progress.Arguments: train [2021-06-24:08:50:34:INFO] Running standalone xgboost training. [2021-06-24:08:50:34:INFO] File size need to be processed in the node: 238.47mb. 
Available memory size in the node: 8398.55mb [2021-06-24:08:50:34:INFO] Determined delimiter of CSV input is ',' [08:50:34] S3DistributionType set as FullyReplicated [08:50:36] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2021-06-24:08:50:36:INFO] Determined delimiter of CSV input is ',' [08:50:36] S3DistributionType set as FullyReplicated [08:50:38] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [08:50:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [0]#011train-error:0.294733#011validation-error:0.3034 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [08:50:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [1]#011train-error:0.282533#011validation-error:0.285 [08:50:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [2]#011train-error:0.271933#011validation-error:0.278 [08:50:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 0 pruned nodes, max_depth=5 [3]#011train-error:0.262467#011validation-error:0.2727 [08:50:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [4]#011train-error:0.260467#011validation-error:0.2715 [08:50:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [5]#011train-error:0.250267#011validation-error:0.2592 [08:50:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 0 pruned nodes, max_depth=5 [6]#011train-error:0.242#011validation-error:0.253 [08:50:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [7]#011train-error:0.233867#011validation-error:0.2458 [08:50:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [8]#011train-error:0.231067#011validation-error:0.2443 [08:50:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [9]#011train-error:0.2272#011validation-error:0.2414 [08:50:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [10]#011train-error:0.219133#011validation-error:0.2364 [08:50:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 0 pruned nodes, max_depth=5 [11]#011train-error:0.215#011validation-error:0.2322 [08:50:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [12]#011train-error:0.209867#011validation-error:0.2269 [08:51:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [13]#011train-error:0.206067#011validation-error:0.2227 [08:51:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [14]#011train-error:0.201133#011validation-error:0.2224 [08:51:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [15]#011train-error:0.196267#011validation-error:0.2179 [08:51:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [16]#011train-error:0.195#011validation-error:0.2174 [08:51:06] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [17]#011train-error:0.1926#011validation-error:0.2161 [08:51:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 14 pruned nodes, max_depth=5 [18]#011train-error:0.188667#011validation-error:0.2134 [08:51:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [19]#011train-error:0.184867#011validation-error:0.2117 [08:51:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [20]#011train-error:0.1822#011validation-error:0.2084 [08:51:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [21]#011train-error:0.180333#011validation-error:0.2061 [08:51:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [22]#011train-error:0.177333#011validation-error:0.2051 [08:51:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [23]#011train-error:0.175467#011validation-error:0.2016 [08:51:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [24]#011train-error:0.1718#011validation-error:0.1995 [08:51:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [25]#011train-error:0.169933#011validation-error:0.1968 [08:51:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [26]#011train-error:0.168333#011validation-error:0.1957 [08:51:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [27]#011train-error:0.166467#011validation-error:0.1931 [08:51:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 [28]#011train-error:0.1648#011validation-error:0.1913 [08:51:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [29]#011train-error:0.161333#011validation-error:0.1915 [08:51:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [30]#011train-error:0.161267#011validation-error:0.1904 [08:51:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [31]#011train-error:0.159533#011validation-error:0.1899 [08:51:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [32]#011train-error:0.158267#011validation-error:0.1883 [08:51:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [33]#011train-error:0.156867#011validation-error:0.1878 [08:51:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [34]#011train-error:0.1552#011validation-error:0.1867 [08:51:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5 [35]#011train-error:0.154467#011validation-error:0.1857 [08:51:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [36]#011train-error:0.153067#011validation-error:0.1853 [08:51:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 16 pruned nodes, max_depth=5 [37]#011train-error:0.1516#011validation-error:0.1847 [08:51:35] src/tree/updater_prune.cc:74: 
tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [38]#011train-error:0.1502#011validation-error:0.1832 [08:51:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5 [39]#011train-error:0.148733#011validation-error:0.1829 [08:51:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [40]#011train-error:0.1478#011validation-error:0.1831 [08:51:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [41]#011train-error:0.146#011validation-error:0.1822 [08:51:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [42]#011train-error:0.1458#011validation-error:0.1817 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMakers Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can peform inference on a large number of samples. An example of this in industry might be peforming an end of month report. This method of inference can also be useful to us as it means to can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer objects from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output ....................................Arguments: serve [2021-06-24 09:02:55 +0000] [1] [INFO] Starting gunicorn 19.9.0 [2021-06-24 09:02:55 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-06-24 09:02:55 +0000] [1] [INFO] Using worker: gevent [2021-06-24 09:02:55 +0000] [20] [INFO] Booting worker with pid: 20 [2021-06-24 09:02:55 +0000] [21] [INFO] Booting worker with pid: 21 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. 
monkey.patch_all(subprocess=True) [2021-06-24:09:02:56:INFO] Model loaded successfully for worker : 20 [2021-06-24 09:02:56 +0000] [22] [INFO] Booting worker with pid: 22 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2021-06-24:09:02:56:INFO] Model loaded successfully for worker : 21 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2021-06-24:09:02:56:INFO] Model loaded successfully for worker : 22 [2021-06-24 09:02:56 +0000] [23] [INFO] Booting worker with pid: 23 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. 
monkey.patch_all(subprocess=True) [2021-06-24:09:02:56:INFO] Model loaded successfully for worker : 23 2021-06-24T09:02:59.891:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2021-06-24:09:03:03:INFO] Sniff delimiter as ',' [2021-06-24:09:03:03:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:03:INFO] Sniff delimiter as ',' [2021-06-24:09:03:03:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:03:INFO] Sniff delimiter as ',' [2021-06-24:09:03:03:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:03:INFO] Sniff delimiter as ',' [2021-06-24:09:03:03:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:03:INFO] Sniff delimiter as ',' [2021-06-24:09:03:03:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:03:INFO] Sniff delimiter as ',' [2021-06-24:09:03:03:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:03:INFO] Sniff delimiter as ',' [2021-06-24:09:03:03:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:03:INFO] Sniff delimiter as ',' [2021-06-24:09:03:03:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:06:INFO] Sniff delimiter as ',' [2021-06-24:09:03:06:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:07:INFO] Sniff delimiter as ',' [2021-06-24:09:03:07:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:06:INFO] Sniff delimiter as ',' [2021-06-24:09:03:06:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:07:INFO] Sniff delimiter as ',' [2021-06-24:09:03:07:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:07:INFO] Sniff delimiter as ',' [2021-06-24:09:03:07:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:07:INFO] Sniff delimiter as ',' [2021-06-24:09:03:07:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:07:INFO] Sniff delimiter as ',' [2021-06-24:09:03:07:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:07:INFO] Sniff delimiter as ',' [2021-06-24:09:03:07:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:10:INFO] Sniff delimiter as ',' [2021-06-24:09:03:10:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:10:INFO] Sniff delimiter as ',' [2021-06-24:09:03:10:INFO] Sniff delimiter as ',' [2021-06-24:09:03:10:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:10:INFO] Sniff delimiter as ',' [2021-06-24:09:03:10:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:11:INFO] Sniff delimiter as ',' [2021-06-24:09:03:11:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:10:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:11:INFO] Sniff delimiter as ',' [2021-06-24:09:03:11:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:11:INFO] Sniff delimiter as ',' [2021-06-24:09:03:11:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:11:INFO] Sniff delimiter as ',' [2021-06-24:09:03:11:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:14:INFO] Sniff delimiter as ',' [2021-06-24:09:03:14:INFO] Sniff delimiter as ',' [2021-06-24:09:03:14:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:15:INFO] Sniff delimiter as ',' [2021-06-24:09:03:15:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:15:INFO] Sniff delimiter as ',' [2021-06-24:09:03:15:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:14:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:15:INFO] Sniff 
delimiter as ',' [2021-06-24:09:03:15:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:15:INFO] Sniff delimiter as ',' [2021-06-24:09:03:15:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:18:INFO] Sniff delimiter as ',' [2021-06-24:09:03:18:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:18:INFO] Sniff delimiter as ',' [2021-06-24:09:03:18:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:18:INFO] Sniff delimiter as ',' [2021-06-24:09:03:18:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:19:INFO] Sniff delimiter as ',' [2021-06-24:09:03:19:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:18:INFO] Sniff delimiter as ',' [2021-06-24:09:03:18:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:18:INFO] Sniff delimiter as ',' [2021-06-24:09:03:18:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:18:INFO] Sniff delimiter as ',' [2021-06-24:09:03:18:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:03:19:INFO] Sniff delimiter as ',' [2021-06-24:09:03:19:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/475.1 KiB (2.4 MiB/s) with 1 file(s) remaining Completed 475.1 KiB/475.1 KiB (4.1 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-1-039506837819/xgboost-2021-06-24-08-57-02-181/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. 
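Before re-encoding the new reviews it can be worth taking a very quick look at what was loaded. The cell below is only a minimal sketch; it assumes, as above, that `new_X` is a list of already-processed token lists and `new_Y` is the matching list of 0/1 labels returned by `new_data.get_new_data()`.
###Code
from collections import Counter

# How much new data do we have, and is it roughly balanced? (sketch)
print(len(new_X), 'reviews')
print(Counter(new_Y))

# Peek at the first few processed tokens of one review.
print(new_X[0][:10])
###Output
_____no_output_____
###Markdown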
(TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code len(new_X[0]) # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary = vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() # new_XV.todense() # len(new_XV[1]) ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) data_dir ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. 
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ............................Arguments: serve [2021-06-24 09:28:26 +0000] [1] [INFO] Starting gunicorn 19.9.0 [2021-06-24 09:28:26 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-06-24 09:28:26 +0000] [1] [INFO] Using worker: gevent [2021-06-24 09:28:26 +0000] [21] [INFO] Booting worker with pid: 21 [2021-06-24 09:28:26 +0000] [22] [INFO] Booting worker with pid: 22 [2021-06-24 09:28:26 +0000] [23] [INFO] Booting worker with pid: 23 [2021-06-24 09:28:26 +0000] [24] [INFO] Booting worker with pid: 24 Arguments: serve [2021-06-24 09:28:26 +0000] [1] [INFO] Starting gunicorn 19.9.0 [2021-06-24 09:28:26 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-06-24 09:28:26 +0000] [1] [INFO] Using worker: gevent [2021-06-24 09:28:26 +0000] [21] [INFO] Booting worker with pid: 21 [2021-06-24 09:28:26 +0000] [22] [INFO] Booting worker with pid: 22 [2021-06-24 09:28:26 +0000] [23] [INFO] Booting worker with pid: 23 [2021-06-24 09:28:26 +0000] [24] [INFO] Booting worker with pid: 24 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2021-06-24:09:28:26:INFO] Model loaded successfully for worker : 21 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2021-06-24:09:28:26:INFO] Model loaded successfully for worker : 22 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2021-06-24:09:28:26:INFO] Model loaded successfully for worker : 23 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. 
Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2021-06-24:09:28:26:INFO] Model loaded successfully for worker : 24 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2021-06-24:09:28:26:INFO] Model loaded successfully for worker : 21 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2021-06-24:09:28:26:INFO] Model loaded successfully for worker : 22 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. monkey.patch_all(subprocess=True) [2021-06-24:09:28:26:INFO] Model loaded successfully for worker : 23 /opt/amazon/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:65: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['urllib3.util.ssl_ (/opt/amazon/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/opt/amazon/lib/python3.7/site-packages/urllib3/util/__init__.py)']. 
monkey.patch_all(subprocess=True) [2021-06-24:09:28:26:INFO] Model loaded successfully for worker : 24 2021-06-24T09:28:30.382:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2021-06-24:09:28:35:INFO] Sniff delimiter as ',' [2021-06-24:09:28:35:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:28:35:INFO] Sniff delimiter as ',' [2021-06-24:09:28:35:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:28:35:INFO] Sniff delimiter as ',' [2021-06-24:09:28:35:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:28:35:INFO] Sniff delimiter as ',' [2021-06-24:09:28:35:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:28:35:INFO] Sniff delimiter as ',' [2021-06-24:09:28:35:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:28:35:INFO] Sniff delimiter as ',' [2021-06-24:09:28:35:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:28:35:INFO] Sniff delimiter as ',' [2021-06-24:09:28:35:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:28:35:INFO] Sniff delimiter as ',' [2021-06-24:09:28:35:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:28:38:INFO] Sniff delimiter as ',' [2021-06-24:09:28:38:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:28:38:INFO] Sniff delimiter as ',' [2021-06-24:09:28:38:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:28:39:INFO] Sniff delimiter as ',' [2021-06-24:09:28:39:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:28:39:INFO] Sniff delimiter as ',' [2021-06-24:09:28:39:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:28:39:INFO] Sniff delimiter as ',' [2021-06-24:09:28:39:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:28:39:INFO] Sniff delimiter as ',' [2021-06-24:09:28:39:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:28:39:INFO] Sniff delimiter as ',' [2021-06-24:09:28:39:INFO] Determined delimiter of CSV input is ',' [2021-06-24:09:28:39:INFO] Sniff delimiter as ',' [2021-06-24:09:28:39:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir xgb_transformer.output_path data_dir ###Output _____no_output_____ ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. 
Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. Using already existing model: xgboost-2021-06-24-08-47-13-061 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['yeah', 'archetyp', 'simpl', 'inspir', 'movi', 'end', 'entir', 'crowd', 'stadium', 'get', 'peopl', 'rais', 'hand', 'give', 'chill', 'whenev', 'see', 'brilliant', 'joseph', 'wonder', 'lone', 'sad', 'kid', 'far', 'disappoint', 'anyon', 'anyth', 'life', 'way', 'interact', 'danni', 'glover', 'tri', 'make', 'believ', 'magic', 'angel', 'funni', 'exhilar', 'nice', 'famili', 'movi', 'conced', 'rather', 'corni', 'happi', 'end', 'hey', 'realli', 'matter', 'movi', 'retain', 'basic', 'qualiti', 'good', 'act', 'inspir', 'theme', 'banana'], 0) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. 
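Before fitting it, though, it can be handy to pull a few more misclassified reviews in one go rather than re-running the cell above repeatedly. This is only a sketch; it relies on the `gn` generator and the deployed `xgb_predictor` defined above, so each iteration makes a few calls to the endpoint.
###Code
# Inspect a handful of misclassified reviews (sketch): print the true label and the first few tokens.
for _ in range(3):
    review, label = next(gn)
    print(label, ' '.join(review[:20]))
###Output
_____no_output_____
###Markdown
With a few examples in hand, we now fit the new `CountVectorizer`.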
###Code
new_vectorizer = CountVectorizer(max_features=5000,
                                 preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X)
###Output
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:489: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None' warnings.warn("The parameter 'token_pattern' will not be used"
###Markdown
Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets.
###Code
original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys())
###Output
_____no_output_____
###Markdown
We can look at the words that were in the original vocabulary but not in the new vocabulary.
###Code
print(original_vocabulary - new_vocabulary)
###Output
_____no_output_____
###Markdown
And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary.
###Code
print(new_vocabulary - original_vocabulary)
###Output
_____no_output_____
###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.

**Question:** What exactly is going on here? Which words, if any, appear with a larger than expected frequency, and what does that mean? What has changed about the world that our original model no longer takes into account?

**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data.

(TODO) Build a new model
Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.

To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.

**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.

In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.
###Code
new_XV = new_vectorizer.transform(new_X).toarray()
###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.
###Code
len(new_XV[0])
###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.
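Before doing that, one way to follow up on the open question above is to check how often the words that only appear in the new vocabulary actually occur in the new reviews. This is just an exploratory sketch; it assumes `new_XV`, `new_vectorizer`, `original_vocabulary` and `new_vocabulary` are still in memory from the cells above.
###Code
import numpy as np

# For each word that is in the new vocabulary but not the original one,
# count how many times it appears across the newly encoded reviews (sketch).
new_only_words = new_vocabulary - original_vocabulary
new_only_counts = {w: int(np.sum(new_XV[:, new_vectorizer.vocabulary_[w]])) for w in new_only_words}

# Show the most frequent of these words; anything with a surprisingly large count is worth a closer look.
print(sorted(new_only_counts.items(), key=lambda kv: kv[1], reverse=True)[:10])
###Output
_____no_output_____
###Markdown
With that context, we split the newly encoded data into training and validation sets.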
###Code
import pandas as pd

# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])
###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.
###Code
new_X = None
###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
###Code
pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False)
###Output
_____no_output_____
###Markdown
Now that we've saved our data to the local instance, we can safely delete the variables to save on memory.
###Code
new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None
###Output
_____no_output_____
###Markdown
Lastly, we make sure to upload the new training and validation sets to S3.

**TODO:** Upload the new data as well as the new training and validation data sets to S3.
###Code
# TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
# One possible completion, following the same upload pattern used for the original data above.
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.

**TODO:** Create a new XGBoost estimator object.
###Code
# TODO: First, create a SageMaker estimator object for our model.
# One possible completion, reusing the same container, role and instance type as the original model.
new_xgb = sagemaker.estimator.Estimator(container,
                                        role,
                                        train_instance_count=1,
                                        train_instance_type='ml.m4.xlarge',
                                        output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                        sagemaker_session=session)

# TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were
#       used when training the original model.
new_xgb.set_hyperparameters(max_depth=5,
                            eta=0.2,
                            gamma=4,
                            min_child_weight=6,
                            subsample=0.8,
                            silent=0,
                            objective='binary:logistic',
                            early_stopping_rounds=10,
                            num_round=500)
###Output
_____no_output_____
###Markdown
Once the model has been created, we can train it with our new data.

**TODO:** Train the new XGBoost model.
###Code
# TODO: First, make sure that you create s3 input objects so that SageMaker knows where to
#       find the training and validation data.
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')

# TODO: Using the new validation and training data, 'fit' your new model.
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})
###Output
_____no_output_____
###Markdown
(TODO) Check the new model
So now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.

To do this, we will first test our model on the new data.

**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much.
In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.

**Question:** How might you address the leakage problem?

First, we create a new transformer based on our new XGBoost model.

**TODO:** Create a transformer object from the newly created XGBoost model.
###Code
# TODO: Create a transformer object from the new_xgb model
# One possible completion, mirroring the transformer we built for the original model.
new_xgb_transformer = new_xgb.transformer(instance_count=1, instance_type='ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we test our model on the new data.

**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable).
###Code
# TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to
#       'wait' for the transform job to finish.
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output
_____no_output_____
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
_____no_output_____
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.

However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.

To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
    cache_data = pickle.load(f)
    print("Read preprocessed data from cache file:", "preprocessed_data.pkl")

test_X = cache_data['words_test']
test_Y = cache_data['labels_test']

# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
_____no_output_____
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.

**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
# One possible completion, applying the new vocabulary to the original test reviews.
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
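As with the earlier checks, a quick sanity check that the re-encoded test reviews have the expected width (the size of the new vocabulary, i.e. 5000) can catch encoding mistakes early. This is a sketch and assumes the TODO above has been completed so that `test_X` is now a NumPy array.
###Code
# Each re-encoded test review should be as wide as the new vocabulary (sketch).
len(test_X[0])
###Output
_____no_output_____
###Markdown
Now we write the re-encoded test data out, upload it to S3 and run it through the new transformer.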
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)

new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()

!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir

predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model.

Step 6: (TODO) Updating the Model
So we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.

Of course, to do this we need to create an endpoint configuration for our newly created model.

First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.

**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime

# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
# One possible completion: build a unique name from a timestamp.
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())

# TODO: Using the SageMaker Client, construct the endpoint configuration.
# One possible completion using the low level (boto3) client; the production variant points at the new model.
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
                                    EndpointConfigName = new_xgb_endpoint_config_name,
                                    ProductionVariants = [{
                                        "InstanceType": "ml.m4.xlarge",
                                        "InitialVariantWeight": 1,
                                        "InitialInstanceCount": 1,
                                        "ModelName": new_xgb_transformer.model_name,
                                        "VariantName": "XGB-Model"
                                    }])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.

Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.

**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
# One possible completion: ask SageMaker to switch the existing endpoint over to the new configuration.
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint,
                                         EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
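If you would rather check on the update without blocking, the low level client can also report the endpoint's current status and which configuration it is serving. The cell below is a sketch that uses the boto3 `describe_endpoint` call; the field names come from that API rather than from anything defined in this notebook.
###Code
# Poll the endpoint without blocking (sketch).
endpoint_info = session.sagemaker_client.describe_endpoint(EndpointName=xgb_predictor.endpoint)
print(endpoint_info['EndpointStatus'])      # 'Updating' while the new configuration rolls out
print(endpoint_info['EndpointConfigName'])  # should eventually show the new configuration
###Output
_____no_output_____
###Markdown
To simply block until the update has finished, we use the wait call in the next cell.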
###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. 
Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-05-20 16:37:38-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 10.4MB/s in 10s 2020-05-20 16:37:49 (7.68 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.
- Model Artifacts
- Training Code (Container)
- Inference Code (Container)

The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.

The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.

The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role

# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()

# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost', '0.90-1')

# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
                                    role,      # What is our current IAM Role
                                    train_instance_count=1, # How many compute instances
                                    train_instance_type='ml.m4.xlarge', # What kind of compute instances
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                    sagemaker_session=session)

# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5,
                        eta=0.2,
                        gamma=4,
                        min_child_weight=6,
                        subsample=0.8,
                        silent=0,
                        objective='binary:logistic',
                        early_stopping_rounds=10,
                        num_round=500)
###Output
_____no_output_____
###Markdown
Fit the XGBoost model
Now that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv')

xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
2020-05-21 09:26:43 Starting - Starting the training job...
2020-05-21 09:26:45 Starting - Launching requested ML instances.........
2020-05-21 09:28:17 Starting - Preparing the instances for training......
2020-05-21 09:29:31 Downloading - Downloading input data...
2020-05-21 09:29:57 Training - Downloading the training image...
2020-05-21 09:30:28 Training - Training image download completed. Training in progress.INFO:sagemaker-containers:Imported framework sagemaker_xgboost_container.training
INFO:sagemaker-containers:Failed to parse hyperparameter objective value binary:logistic to Json.
Returning the value itself INFO:sagemaker-containers:No GPUs detected (normal if no gpus installed) INFO:sagemaker_xgboost_container.training:Running XGBoost Sagemaker in algorithm mode INFO:root:Determined delimiter of CSV input is ',' INFO:root:Determined delimiter of CSV input is ',' INFO:root:Determined delimiter of CSV input is ',' [09:30:32] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, INFO:root:Determined delimiter of CSV input is ',' [09:30:33] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, INFO:root:Single node training. INFO:root:Train matrix has 15000 rows INFO:root:Validation matrix has 10000 rows [0]#011train-error:0.295533#011validation-error:0.2995 [1]#011train-error:0.280733#011validation-error:0.2847 [2]#011train-error:0.276733#011validation-error:0.2832 [3]#011train-error:0.266667#011validation-error:0.2754 [4]#011train-error:0.260533#011validation-error:0.2695 [5]#011train-error:0.2584#011validation-error:0.2687 [6]#011train-error:0.253#011validation-error:0.2629 [7]#011train-error:0.235467#011validation-error:0.2494 [8]#011train-error:0.2338#011validation-error:0.2476 [9]#011train-error:0.225#011validation-error:0.2398 [10]#011train-error:0.217333#011validation-error:0.235 [11]#011train-error:0.213933#011validation-error:0.2307 [12]#011train-error:0.210133#011validation-error:0.2272 [13]#011train-error:0.204267#011validation-error:0.2257 [14]#011train-error:0.199533#011validation-error:0.2223 [15]#011train-error:0.194667#011validation-error:0.2193 [16]#011train-error:0.194#011validation-error:0.2189 [17]#011train-error:0.1888#011validation-error:0.2163 [18]#011train-error:0.186933#011validation-error:0.2142 [19]#011train-error:0.1832#011validation-error:0.2116 [20]#011train-error:0.1824#011validation-error:0.2122 [21]#011train-error:0.1786#011validation-error:0.2103 [22]#011train-error:0.1784#011validation-error:0.2065 [23]#011train-error:0.174467#011validation-error:0.2042 [24]#011train-error:0.171933#011validation-error:0.2037 [25]#011train-error:0.1708#011validation-error:0.2019 [26]#011train-error:0.170667#011validation-error:0.2007 [27]#011train-error:0.1674#011validation-error:0.1986 [28]#011train-error:0.165467#011validation-error:0.1966 [29]#011train-error:0.163733#011validation-error:0.1954 [30]#011train-error:0.163133#011validation-error:0.1955 [31]#011train-error:0.162067#011validation-error:0.1935 [32]#011train-error:0.158667#011validation-error:0.1903 [33]#011train-error:0.157867#011validation-error:0.1896 [34]#011train-error:0.156933#011validation-error:0.1891 [35]#011train-error:0.1556#011validation-error:0.1878 [36]#011train-error:0.1554#011validation-error:0.1876 [37]#011train-error:0.154533#011validation-error:0.1868 [38]#011train-error:0.1544#011validation-error:0.1856 [39]#011train-error:0.1528#011validation-error:0.1861 [40]#011train-error:0.1514#011validation-error:0.1849 [41]#011train-error:0.150467#011validation-error:0.1839 [42]#011train-error:0.148467#011validation-error:0.181 [43]#011train-error:0.146867#011validation-error:0.1809 [44]#011train-error:0.146467#011validation-error:0.1797 [45]#011train-error:0.1462#011validation-error:0.1788 [46]#011train-error:0.145667#011validation-error:0.1779 [47]#011train-error:0.1448#011validation-error:0.1767 [48]#011train-error:0.1436#011validation-error:0.1761 [49]#011train-error:0.143333#011validation-error:0.1761 
[50]#011train-error:0.142#011validation-error:0.175 [51]#011train-error:0.1402#011validation-error:0.1741 [52]#011train-error:0.139267#011validation-error:0.1745 [53]#011train-error:0.137333#011validation-error:0.1738 [54]#011train-error:0.137133#011validation-error:0.1741 [55]#011train-error:0.1362#011validation-error:0.1725 [56]#011train-error:0.135133#011validation-error:0.1724 [57]#011train-error:0.134133#011validation-error:0.1713 [58]#011train-error:0.133067#011validation-error:0.1701 [59]#011train-error:0.133#011validation-error:0.1685 [60]#011train-error:0.132667#011validation-error:0.1676 [61]#011train-error:0.131733#011validation-error:0.1673 [62]#011train-error:0.131333#011validation-error:0.167 [63]#011train-error:0.130867#011validation-error:0.1668 [64]#011train-error:0.1292#011validation-error:0.1666 [65]#011train-error:0.128867#011validation-error:0.1662 [66]#011train-error:0.1278#011validation-error:0.1655 [67]#011train-error:0.127467#011validation-error:0.1639 [68]#011train-error:0.126267#011validation-error:0.1646 [69]#011train-error:0.126067#011validation-error:0.1649 [70]#011train-error:0.1248#011validation-error:0.1646 [71]#011train-error:0.124#011validation-error:0.1635 [72]#011train-error:0.123667#011validation-error:0.1638 [73]#011train-error:0.123333#011validation-error:0.1641 [74]#011train-error:0.122533#011validation-error:0.1632 [75]#011train-error:0.121667#011validation-error:0.164 [76]#011train-error:0.1206#011validation-error:0.1629 [77]#011train-error:0.1198#011validation-error:0.1629 [78]#011train-error:0.119333#011validation-error:0.1632 [79]#011train-error:0.1192#011validation-error:0.1623 [80]#011train-error:0.118267#011validation-error:0.1614 [81]#011train-error:0.118#011validation-error:0.1604 [82]#011train-error:0.117067#011validation-error:0.1587 [83]#011train-error:0.116667#011validation-error:0.1583 [84]#011train-error:0.115533#011validation-error:0.1588 [85]#011train-error:0.115067#011validation-error:0.1592 [86]#011train-error:0.1146#011validation-error:0.1591 [87]#011train-error:0.113667#011validation-error:0.1596 [88]#011train-error:0.113#011validation-error:0.1584 [89]#011train-error:0.112267#011validation-error:0.1562 [90]#011train-error:0.111667#011validation-error:0.1562 [91]#011train-error:0.110733#011validation-error:0.156 [92]#011train-error:0.110133#011validation-error:0.1549 [93]#011train-error:0.109867#011validation-error:0.1546 [94]#011train-error:0.109667#011validation-error:0.1548 [95]#011train-error:0.109067#011validation-error:0.1545 [96]#011train-error:0.1088#011validation-error:0.1528 [97]#011train-error:0.109#011validation-error:0.1524 [98]#011train-error:0.108667#011validation-error:0.1524 [99]#011train-error:0.1084#011validation-error:0.1526 [100]#011train-error:0.108667#011validation-error:0.1516 [101]#011train-error:0.107867#011validation-error:0.1516 [102]#011train-error:0.107867#011validation-error:0.151 [103]#011train-error:0.107067#011validation-error:0.1509 [104]#011train-error:0.107067#011validation-error:0.1506 [105]#011train-error:0.106467#011validation-error:0.1501 [106]#011train-error:0.106467#011validation-error:0.1499 [107]#011train-error:0.105867#011validation-error:0.1497 [108]#011train-error:0.105133#011validation-error:0.1496 [109]#011train-error:0.104533#011validation-error:0.149 [110]#011train-error:0.1044#011validation-error:0.1485 [111]#011train-error:0.104067#011validation-error:0.1471 [112]#011train-error:0.1032#011validation-error:0.1469 ###Markdown Testing the modelNow that we've fit our XGBoost 
model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real-time. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we first need to create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with CSV data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running, but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback, we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output ......................[2020-05-21 09:37:22 +0000] [14] [INFO] Starting gunicorn 19.10.0 [2020-05-21 09:37:22 +0000] [14] [INFO] Listening at: unix:/tmp/gunicorn.sock (14) [2020-05-21 09:37:22 +0000] [14] [INFO] Using worker: gevent [2020-05-21 09:37:22 +0000] [21] [INFO] Booting worker with pid: 21 [2020-05-21 09:37:22 +0000] [22] [INFO] Booting worker with pid: 22 [2020-05-21 09:37:22 +0000] [26] [INFO] Booting worker with pid: 26 [2020-05-21 09:37:22 +0000] [27] [INFO] Booting worker with pid: 27 [2020-05-21:09:37:36:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [21/May/2020:09:37:36 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" [2020-05-21:09:37:36:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [21/May/2020:09:37:36 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" [2020-05-21:09:37:36:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [21/May/2020:09:37:36 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" [2020-05-21:09:37:36:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [21/May/2020:09:37:36 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" [2020-05-21:09:37:39:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:39:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:39:INFO] No GPUs detected (normal if no gpus installed) [2020-05-21:09:37:39:INFO] No GPUs detected (normal if no gpus installed) [2020-05-21:09:37:39:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:39:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:39:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:39:INFO] No GPUs detected (normal if no gpus installed) [2020-05-21:09:37:39:INFO] No GPUs detected (normal if no gpus installed)
[2020-05-21:09:37:39:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:39:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:39:INFO] Determined delimiter of CSV input is ',' 2020-05-21T09:37:36.345:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD 169.254.255.130 - - [21/May/2020:09:37:42 +0000] "POST /invocations HTTP/1.1" 200 12216 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:37:42 +0000] "POST /invocations HTTP/1.1" 200 12216 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:37:43 +0000] "POST /invocations HTTP/1.1" 200 12222 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:37:43 +0000] "POST /invocations HTTP/1.1" 200 12148 "-" "Go-http-client/1.1" [2020-05-21:09:37:43:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:37:43 +0000] "POST /invocations HTTP/1.1" 200 12165 "-" "Go-http-client/1.1" [2020-05-21:09:37:43:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:43:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:43:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:37:43 +0000] "POST /invocations HTTP/1.1" 200 12222 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:37:43 +0000] "POST /invocations HTTP/1.1" 200 12148 "-" "Go-http-client/1.1" [2020-05-21:09:37:43:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:37:43 +0000] "POST /invocations HTTP/1.1" 200 12165 "-" "Go-http-client/1.1" [2020-05-21:09:37:43:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:43:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:43:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:37:46 +0000] "POST /invocations HTTP/1.1" 200 12224 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:37:46 +0000] "POST /invocations HTTP/1.1" 200 12194 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:37:46 +0000] "POST /invocations HTTP/1.1" 200 12224 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:37:46 +0000] "POST /invocations HTTP/1.1" 200 12194 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:37:46 +0000] "POST /invocations HTTP/1.1" 200 12208 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:37:46 +0000] "POST /invocations HTTP/1.1" 200 12195 "-" "Go-http-client/1.1" [2020-05-21:09:37:46:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:46:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:46:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:47:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:37:46 +0000] "POST /invocations HTTP/1.1" 200 12208 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:37:46 +0000] "POST /invocations HTTP/1.1" 200 12195 "-" "Go-http-client/1.1" [2020-05-21:09:37:46:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:46:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:46:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:47:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:50:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:50:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:50:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:50:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:37:53 +0000] "POST /invocations 
HTTP/1.1" 200 12204 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:37:53 +0000] "POST /invocations HTTP/1.1" 200 12204 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:37:53 +0000] "POST /invocations HTTP/1.1" 200 12167 "-" "Go-http-client/1.1" [2020-05-21:09:37:53:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:37:53 +0000] "POST /invocations HTTP/1.1" 200 12191 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:37:53 +0000] "POST /invocations HTTP/1.1" 200 12196 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:37:53 +0000] "POST /invocations HTTP/1.1" 200 12167 "-" "Go-http-client/1.1" [2020-05-21:09:37:53:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:37:53 +0000] "POST /invocations HTTP/1.1" 200 12191 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:37:53 +0000] "POST /invocations HTTP/1.1" 200 12196 "-" "Go-http-client/1.1" [2020-05-21:09:37:54:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:54:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:54:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:54:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:54:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:54:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:57:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:37:57 +0000] "POST /invocations HTTP/1.1" 200 12204 "-" "Go-http-client/1.1" [2020-05-21:09:37:57:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:57:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:57:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:37:57 +0000] "POST /invocations HTTP/1.1" 200 12204 "-" "Go-http-client/1.1" [2020-05-21:09:37:57:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:57:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:57:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:37:57:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:38:00 +0000] "POST /invocations HTTP/1.1" 200 12208 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:38:00 +0000] "POST /invocations HTTP/1.1" 200 12208 "-" "Go-http-client/1.1" [2020-05-21:09:38:00:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:38:00 +0000] "POST /invocations HTTP/1.1" 200 12192 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:38:00 +0000] "POST /invocations HTTP/1.1" 200 12187 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:38:01 +0000] "POST /invocations HTTP/1.1" 200 12225 "-" "Go-http-client/1.1" [2020-05-21:09:38:01:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:38:01:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:38:01:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:38:00:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:38:00 +0000] "POST /invocations HTTP/1.1" 200 12192 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:38:00 +0000] "POST /invocations HTTP/1.1" 200 12187 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:38:01 +0000] "POST /invocations HTTP/1.1" 200 12225 "-" "Go-http-client/1.1" [2020-05-21:09:38:01:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:38:01:INFO] Determined delimiter of CSV input is ',' 
[2020-05-21:09:38:01:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:38:04 +0000] "POST /invocations HTTP/1.1" 200 12210 "-" "Go-http-client/1.1" [2020-05-21:09:38:04:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:38:04 +0000] "POST /invocations HTTP/1.1" 200 12208 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:38:04 +0000] "POST /invocations HTTP/1.1" 200 12209 "-" "Go-http-client/1.1" [2020-05-21:09:38:04:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:38:04 +0000] "POST /invocations HTTP/1.1" 200 12213 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:38:04 +0000] "POST /invocations HTTP/1.1" 200 12210 "-" "Go-http-client/1.1" [2020-05-21:09:38:04:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:38:04 +0000] "POST /invocations HTTP/1.1" 200 12208 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:38:04 +0000] "POST /invocations HTTP/1.1" 200 12209 "-" "Go-http-client/1.1" [2020-05-21:09:38:04:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:38:04 +0000] "POST /invocations HTTP/1.1" 200 12213 "-" "Go-http-client/1.1" [2020-05-21:09:38:04:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:38:04:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:38:04:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:38:04:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:38:07 +0000] "POST /invocations HTTP/1.1" 200 12242 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:38:07 +0000] "POST /invocations HTTP/1.1" 200 12195 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:38:08 +0000] "POST /invocations HTTP/1.1" 200 12188 "-" "Go-http-client/1.1" [2020-05-21:09:38:08:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:38:08:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:38:08 +0000] "POST /invocations HTTP/1.1" 200 12226 "-" "Go-http-client/1.1" [2020-05-21:09:38:08:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:38:08:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:38:07 +0000] "POST /invocations HTTP/1.1" 200 12242 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:38:07 +0000] "POST /invocations HTTP/1.1" 200 12195 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:38:08 +0000] "POST /invocations HTTP/1.1" 200 12188 "-" "Go-http-client/1.1" [2020-05-21:09:38:08:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:38:08:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:38:08 +0000] "POST /invocations HTTP/1.1" 200 12226 "-" "Go-http-client/1.1" [2020-05-21:09:38:08:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:38:08:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. 
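If you would rather stay in Python than shell out to the AWS CLI, roughly the same copy can be done with `boto3`. The snippet below is only a sketch: it assumes that `xgb_transformer.output_path` and `data_dir` are defined as above and that the notebook's default AWS credentials are available. ###Code
import os
import boto3
from urllib.parse import urlparse

# Split the transformer's S3 output path (s3://bucket/prefix) into bucket and key prefix.
parsed = urlparse(xgb_transformer.output_path)
bucket, key_prefix = parsed.netloc, parsed.path.lstrip('/')

# Download every object written by the batch transform job into data_dir.
s3 = boto3.client('s3')
for obj in s3.list_objects_v2(Bucket=bucket, Prefix=key_prefix).get('Contents', []):
    s3.download_file(bucket, obj['Key'], os.path.join(data_dir, os.path.basename(obj['Key'])))
###Output _____no_output_____ ###Markdown The cell below performs the same copy with the AWS CLI, which is the approach used in this notebook.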
###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/473.5 KiB (4.2 MiB/s) with 1 file(s) remaining Completed 473.5 KiB/473.5 KiB (7.5 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-west-1-019518462631/sagemaker-xgboost-2020-05-21-09-33-58-328/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. 
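Rather than eyeballing individual entries, the same sanity check can be written as a single assertion over the whole array; a minimal sketch, assuming `new_XV` and the original `vocabulary` dictionary are in memory as above: ###Code
import numpy as np

# Every bag of words encoded review should have exactly one column per vocabulary word.
new_XV_arr = np.asarray(new_XV)
assert new_XV_arr.shape[1] == len(vocabulary) == 5000, new_XV_arr.shape
print(new_XV_arr.shape)
###Output _____no_output_____ ###Markdown The quick spot checks below do the same thing for a couple of individual rows.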
###Code len(new_XV[100]) new_XV[0] ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ......................[2020-05-21 09:49:55 +0000] [14] [INFO] Starting gunicorn 19.10.0 [2020-05-21 09:49:55 +0000] [14] [INFO] Listening at: unix:/tmp/gunicorn.sock (14) [2020-05-21 09:49:55 +0000] [14] [INFO] Using worker: gevent [2020-05-21 09:49:55 +0000] [21] [INFO] Booting worker with pid: 21 [2020-05-21 09:49:55 +0000] [22] [INFO] Booting worker with pid: 22 [2020-05-21 09:49:55 +0000] [29] [INFO] Booting worker with pid: 29 [2020-05-21 09:49:55 +0000] [30] [INFO] Booting worker with pid: 30 [2020-05-21:09:50:17:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [21/May/2020:09:50:17 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" [2020-05-21:09:50:17:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [21/May/2020:09:50:17 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" [2020-05-21:09:50:17:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [21/May/2020:09:50:17 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" [2020-05-21:09:50:17:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [21/May/2020:09:50:17 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" 2020-05-21T09:50:17.162:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-05-21:09:50:19:INFO] No GPUs detected (normal if no gpus installed) [2020-05-21:09:50:19:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:19:INFO] No GPUs detected (normal if no gpus installed) [2020-05-21:09:50:19:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:19:INFO] No GPUs detected (normal if no gpus installed) [2020-05-21:09:50:19:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:20:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:20:INFO] Determined delimiter of 
CSV input is ',' [2020-05-21:09:50:19:INFO] No GPUs detected (normal if no gpus installed) [2020-05-21:09:50:19:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:20:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:20:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:23 +0000] "POST /invocations HTTP/1.1" 200 12231 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:50:23 +0000] "POST /invocations HTTP/1.1" 200 12225 "-" "Go-http-client/1.1" [2020-05-21:09:50:23:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:23 +0000] "POST /invocations HTTP/1.1" 200 12231 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:50:23 +0000] "POST /invocations HTTP/1.1" 200 12225 "-" "Go-http-client/1.1" [2020-05-21:09:50:23:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:23 +0000] "POST /invocations HTTP/1.1" 200 12171 "-" "Go-http-client/1.1" [2020-05-21:09:50:23:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:23:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:23 +0000] "POST /invocations HTTP/1.1" 200 12178 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:50:23 +0000] "POST /invocations HTTP/1.1" 200 12171 "-" "Go-http-client/1.1" [2020-05-21:09:50:23:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:23:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:23 +0000] "POST /invocations HTTP/1.1" 200 12178 "-" "Go-http-client/1.1" [2020-05-21:09:50:24:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:24:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:26 +0000] "POST /invocations HTTP/1.1" 200 12202 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:50:26 +0000] "POST /invocations HTTP/1.1" 200 12191 "-" "Go-http-client/1.1" [2020-05-21:09:50:27:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:27 +0000] "POST /invocations HTTP/1.1" 200 12226 "-" "Go-http-client/1.1" [2020-05-21:09:50:27:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:27 +0000] "POST /invocations HTTP/1.1" 200 12201 "-" "Go-http-client/1.1" [2020-05-21:09:50:27:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:27:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:26 +0000] "POST /invocations HTTP/1.1" 200 12202 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:50:26 +0000] "POST /invocations HTTP/1.1" 200 12191 "-" "Go-http-client/1.1" [2020-05-21:09:50:27:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:27 +0000] "POST /invocations HTTP/1.1" 200 12226 "-" "Go-http-client/1.1" [2020-05-21:09:50:27:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:27 +0000] "POST /invocations HTTP/1.1" 200 12201 "-" "Go-http-client/1.1" [2020-05-21:09:50:27:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:27:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:34 +0000] "POST /invocations HTTP/1.1" 200 12186 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:50:34 +0000] "POST /invocations HTTP/1.1" 200 12193 "-" "Go-http-client/1.1" [2020-05-21:09:50:34:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:34 +0000] "POST /invocations 
HTTP/1.1" 200 12187 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:50:34 +0000] "POST /invocations HTTP/1.1" 200 12197 "-" "Go-http-client/1.1" [2020-05-21:09:50:34:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:34:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:34:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:34 +0000] "POST /invocations HTTP/1.1" 200 12186 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:50:34 +0000] "POST /invocations HTTP/1.1" 200 12193 "-" "Go-http-client/1.1" [2020-05-21:09:50:34:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:34 +0000] "POST /invocations HTTP/1.1" 200 12187 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:50:34 +0000] "POST /invocations HTTP/1.1" 200 12197 "-" "Go-http-client/1.1" [2020-05-21:09:50:34:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:34:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:34:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:41 +0000] "POST /invocations HTTP/1.1" 200 12232 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:50:41 +0000] "POST /invocations HTTP/1.1" 200 12184 "-" "Go-http-client/1.1" [2020-05-21:09:50:41:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:41 +0000] "POST /invocations HTTP/1.1" 200 12227 "-" "Go-http-client/1.1" [2020-05-21:09:50:41:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:41 +0000] "POST /invocations HTTP/1.1" 200 12190 "-" "Go-http-client/1.1" [2020-05-21:09:50:41:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:41:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:41 +0000] "POST /invocations HTTP/1.1" 200 12232 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:50:41 +0000] "POST /invocations HTTP/1.1" 200 12184 "-" "Go-http-client/1.1" [2020-05-21:09:50:41:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:41 +0000] "POST /invocations HTTP/1.1" 200 12227 "-" "Go-http-client/1.1" [2020-05-21:09:50:41:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:41 +0000] "POST /invocations HTTP/1.1" 200 12190 "-" "Go-http-client/1.1" [2020-05-21:09:50:41:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:41:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:44 +0000] "POST /invocations HTTP/1.1" 200 12193 "-" "Go-http-client/1.1" [2020-05-21:09:50:44:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:44 +0000] "POST /invocations HTTP/1.1" 200 12193 "-" "Go-http-client/1.1" [2020-05-21:09:50:44:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:44 +0000] "POST /invocations HTTP/1.1" 200 12216 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:50:45 +0000] "POST /invocations HTTP/1.1" 200 12192 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:50:45 +0000] "POST /invocations HTTP/1.1" 200 12210 "-" "Go-http-client/1.1" [2020-05-21:09:50:45:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:45:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:45:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:44 +0000] "POST /invocations HTTP/1.1" 200 12216 "-" "Go-http-client/1.1" 
169.254.255.130 - - [21/May/2020:09:50:45 +0000] "POST /invocations HTTP/1.1" 200 12192 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:50:45 +0000] "POST /invocations HTTP/1.1" 200 12210 "-" "Go-http-client/1.1" [2020-05-21:09:50:45:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:45:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:45:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:48 +0000] "POST /invocations HTTP/1.1" 200 12195 "-" "Go-http-client/1.1" [2020-05-21:09:50:48:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:48 +0000] "POST /invocations HTTP/1.1" 200 12187 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:50:48 +0000] "POST /invocations HTTP/1.1" 200 12244 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:50:48 +0000] "POST /invocations HTTP/1.1" 200 12226 "-" "Go-http-client/1.1" [2020-05-21:09:50:48:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:48:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:48 +0000] "POST /invocations HTTP/1.1" 200 12195 "-" "Go-http-client/1.1" [2020-05-21:09:50:48:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:09:50:48 +0000] "POST /invocations HTTP/1.1" 200 12187 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:50:48 +0000] "POST /invocations HTTP/1.1" 200 12244 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:09:50:48 +0000] "POST /invocations HTTP/1.1" 200 12226 "-" "Go-http-client/1.1" [2020-05-21:09:50:48:INFO] Determined delimiter of CSV input is ',' [2020-05-21:09:50:48:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/473.7 KiB (4.0 MiB/s) with 1 file(s) remaining Completed 473.7 KiB/473.7 KiB (7.1 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-west-1-019518462631/sagemaker-xgboost-2020-05-21-09-46-39-394/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. 
We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Using already existing model: sagemaker-xgboost-2020-05-21-09-26-43-277 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['oh', 'god', 'must', 'seen', '11', 'twelv', 'ask', 'may', 'young', 'stupid', 'anyon', 'could', 'see', 'bad', 'movi', 'nasti', 'gross', 'unscari', 'silli', 'seen', 'impress', 'effect', 'disneyland', 'seen', 'better', 'perform', 'school', 'play', 'seen', 'convinc', 'crocodil', 'zoo', 'noth', 'sit', 'water', 'ignor', 'children', 'tap', 'glass', 'stori', 'set', 'northern', 'australia', 'hand', 'ambiti', 'young', 'peopl', 'tri', 'new', 'water', 'sport', 'surf', 'shark', 'fill', 'water', 'soon', 'becom', 'evid', 'someth', 'danger', 'water', 'learn', 'get', 'help', 'grizzli', 'middl', 'age', 'fisherman', 'want', 'kill', 'anim', 'aveng', 'eat', 'famili', 'think', 'seen', 'everi', 'crocodil', 'film', 'made', 'last', 'fifteen', 'year', 'best', 'lake', 'placid', 'wors', 'sequel', 'blood', 'surf', 'would', 'second', 'worst', 'croc', 'flick', 'think', 'primev', 'crocodil', 'tail', 'close', 'behind', 'australian', 'saltwat', 'crododil', 'one', 'danger', 'creatur', 'result', 'hundr', 'injuri', 'death', 'everi', 'year', 'movi', 'like', 'blood', 'surf', 'howev', 'ruin', 'feroci', 'imag', 'creatur', 'good', 'hour', 'half', 'viewer', 'life', 'unless', 'realli', 'want', 'see', 'avoid', 'blood', 'surf', 'banana'], 1) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. 
The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced, or some other artifact of popular culture has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'reincarn', 'playboy', '21st', 'ghetto', 'spill', 'victorian', 'weari'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'omin', 'dubiou', 'optimist', 'banana', 'sophi', 'masterson', 'orchestr'} ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a high frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question:** What exactly is going on here? Which words (if any) appear with a larger than expected frequency, and what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. ###Code for word in (original_vocabulary - new_vocabulary): print(f"Word {word}: {vocabulary.get(word, '-')} vs {new_vectorizer.vocabulary_.get(word, '-')}") for word in (new_vocabulary - original_vocabulary): print(f"Word {word}: {vocabulary.get(word, '-')} vs {new_vectorizer.vocabulary_.get(word, '-')}") sum(vocabulary.values()) / len(vocabulary.values()) sum(new_vectorizer.vocabulary_.values()) / len(new_vectorizer.vocabulary_.values()) ###Output _____no_output_____ ###Markdown Note that both `vocabulary` and `new_vectorizer.vocabulary_` map each word to a feature *index* between 0 and 4999, not to a count, so the averages computed above are average indices (roughly 2500 by construction) rather than word frequencies. What the set differences do tell us is that the make-up of the 5000 most frequent words has changed: a handful of words, such as `banana` (which also shows up in the misclassified review printed earlier), are now common enough in the new reviews to enter the vocabulary, while words like `reincarn` and `playboy` have dropped out. In other words, the distribution of words used in the reviews appears to have shifted, and that shift is something our original model never saw during training. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data.
If we send an bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. 
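# Note: `container`, `role`, `session` and `prefix` below are assumed to be the same objects that
# were used when the original `xgb` estimator was created earlier in the notebook (the XGBoost
# container image, the notebook's IAM role, the active SageMaker session and the S3 prefix).
# Reusing them keeps the new model's setup directly comparable to the original one.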
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2020-05-21 10:22:27 Starting - Starting the training job... 2020-05-21 10:22:29 Starting - Launching requested ML instances......... 2020-05-21 10:24:02 Starting - Preparing the instances for training... 2020-05-21 10:24:51 Downloading - Downloading input data... 2020-05-21 10:25:24 Training - Downloading the training image... 2020-05-21 10:25:45 Training - Training image download completed. Training in progress..INFO:sagemaker-containers:Imported framework sagemaker_xgboost_container.training INFO:sagemaker-containers:Failed to parse hyperparameter objective value binary:logistic to Json. Returning the value itself INFO:sagemaker-containers:No GPUs detected (normal if no gpus installed) INFO:sagemaker_xgboost_container.training:Running XGBoost Sagemaker in algorithm mode INFO:root:Determined delimiter of CSV input is ',' INFO:root:Determined delimiter of CSV input is ',' INFO:root:Determined delimiter of CSV input is ',' [10:25:49] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, INFO:root:Determined delimiter of CSV input is ',' [10:25:50] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, INFO:root:Single node training. 
INFO:root:Train matrix has 15000 rows INFO:root:Validation matrix has 10000 rows [0]#011train-error:0.301667#011validation-error:0.3049 [1]#011train-error:0.2854#011validation-error:0.2891 [2]#011train-error:0.284533#011validation-error:0.2909 [3]#011train-error:0.2804#011validation-error:0.2881 [4]#011train-error:0.268933#011validation-error:0.2785 [5]#011train-error:0.260867#011validation-error:0.2736 [6]#011train-error:0.254267#011validation-error:0.2702 [7]#011train-error:0.248467#011validation-error:0.2639 [8]#011train-error:0.2414#011validation-error:0.2571 [9]#011train-error:0.234933#011validation-error:0.2538 [10]#011train-error:0.226067#011validation-error:0.2477 [11]#011train-error:0.2218#011validation-error:0.2436 [12]#011train-error:0.219067#011validation-error:0.2402 [13]#011train-error:0.2138#011validation-error:0.2411 [14]#011train-error:0.2116#011validation-error:0.2367 [15]#011train-error:0.207333#011validation-error:0.2358 [16]#011train-error:0.201067#011validation-error:0.2323 [17]#011train-error:0.199#011validation-error:0.2304 [18]#011train-error:0.1962#011validation-error:0.2259 [19]#011train-error:0.193867#011validation-error:0.2244 [20]#011train-error:0.191867#011validation-error:0.2242 [21]#011train-error:0.1896#011validation-error:0.2234 [22]#011train-error:0.1866#011validation-error:0.2219 [23]#011train-error:0.185267#011validation-error:0.2208 [24]#011train-error:0.182#011validation-error:0.218 [25]#011train-error:0.180867#011validation-error:0.2154 [26]#011train-error:0.178533#011validation-error:0.2139 [27]#011train-error:0.176533#011validation-error:0.2144 [28]#011train-error:0.175867#011validation-error:0.2124 [29]#011train-error:0.175#011validation-error:0.2113 [30]#011train-error:0.172867#011validation-error:0.2112 [31]#011train-error:0.1706#011validation-error:0.2096 [32]#011train-error:0.17#011validation-error:0.208 [33]#011train-error:0.168133#011validation-error:0.2058 [34]#011train-error:0.166867#011validation-error:0.2048 [35]#011train-error:0.165667#011validation-error:0.2049 [36]#011train-error:0.1638#011validation-error:0.2042 [37]#011train-error:0.1636#011validation-error:0.2036 [38]#011train-error:0.162667#011validation-error:0.2028 [39]#011train-error:0.1614#011validation-error:0.2025 [40]#011train-error:0.161067#011validation-error:0.2014 [41]#011train-error:0.1592#011validation-error:0.2003 [42]#011train-error:0.157733#011validation-error:0.1985 [43]#011train-error:0.157467#011validation-error:0.1981 [44]#011train-error:0.156533#011validation-error:0.1955 [45]#011train-error:0.155733#011validation-error:0.1951 [46]#011train-error:0.1534#011validation-error:0.196 [47]#011train-error:0.153067#011validation-error:0.1949 [48]#011train-error:0.152733#011validation-error:0.194 [49]#011train-error:0.150933#011validation-error:0.1934 [50]#011train-error:0.1502#011validation-error:0.1924 [51]#011train-error:0.149#011validation-error:0.1929 [52]#011train-error:0.1492#011validation-error:0.1919 [53]#011train-error:0.148667#011validation-error:0.1913 [54]#011train-error:0.147267#011validation-error:0.1908 [55]#011train-error:0.145333#011validation-error:0.1892 [56]#011train-error:0.143867#011validation-error:0.1878 [57]#011train-error:0.142133#011validation-error:0.1873 [58]#011train-error:0.1414#011validation-error:0.1878 [59]#011train-error:0.1412#011validation-error:0.1867 [60]#011train-error:0.1402#011validation-error:0.1876 [61]#011train-error:0.139467#011validation-error:0.1892 [62]#011train-error:0.139333#011validation-error:0.1889 
[63]#011train-error:0.139467#011validation-error:0.189 [64]#011train-error:0.139133#011validation-error:0.1876 [65]#011train-error:0.138333#011validation-error:0.187 [66]#011train-error:0.138133#011validation-error:0.1861 [67]#011train-error:0.137267#011validation-error:0.1849 [68]#011train-error:0.1366#011validation-error:0.1842 [69]#011train-error:0.1358#011validation-error:0.1841 [70]#011train-error:0.134133#011validation-error:0.1845 [71]#011train-error:0.133667#011validation-error:0.1833 [72]#011train-error:0.1336#011validation-error:0.1822 [73]#011train-error:0.132333#011validation-error:0.182 [74]#011train-error:0.132067#011validation-error:0.1824 [75]#011train-error:0.131333#011validation-error:0.1821 [76]#011train-error:0.1308#011validation-error:0.1824 [77]#011train-error:0.130533#011validation-error:0.1824 [78]#011train-error:0.129667#011validation-error:0.1812 [79]#011train-error:0.130067#011validation-error:0.1812 [80]#011train-error:0.1298#011validation-error:0.1815 [81]#011train-error:0.129#011validation-error:0.1818 [82]#011train-error:0.129#011validation-error:0.1806 [83]#011train-error:0.128333#011validation-error:0.1811 [84]#011train-error:0.127867#011validation-error:0.1792 [85]#011train-error:0.127467#011validation-error:0.179 [86]#011train-error:0.127333#011validation-error:0.1794 [87]#011train-error:0.126933#011validation-error:0.1788 [88]#011train-error:0.126533#011validation-error:0.1794 [89]#011train-error:0.126333#011validation-error:0.1802 [90]#011train-error:0.124933#011validation-error:0.18 [91]#011train-error:0.1252#011validation-error:0.1804 [92]#011train-error:0.123467#011validation-error:0.1773 [93]#011train-error:0.123067#011validation-error:0.1769 [94]#011train-error:0.1228#011validation-error:0.1773 [95]#011train-error:0.122533#011validation-error:0.177 2020-05-21 10:27:55 Uploading - Uploading generated training model[96]#011train-error:0.122533#011validation-error:0.1776 [97]#011train-error:0.121133#011validation-error:0.1778 [98]#011train-error:0.1212#011validation-error:0.178 [99]#011train-error:0.120333#011validation-error:0.1777 [100]#011train-error:0.121067#011validation-error:0.1763 [101]#011train-error:0.120133#011validation-error:0.1765 [102]#011train-error:0.119333#011validation-error:0.1766 [103]#011train-error:0.118667#011validation-error:0.1764 [104]#011train-error:0.118#011validation-error:0.1769 [105]#011train-error:0.1178#011validation-error:0.1773 [106]#011train-error:0.118133#011validation-error:0.1768 [107]#011train-error:0.117933#011validation-error:0.1772 [108]#011train-error:0.117267#011validation-error:0.1777 [109]#011train-error:0.117467#011validation-error:0.1775 [110]#011train-error:0.1172#011validation-error:0.1771 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. 
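Before doing that, a brief note on the leakage question above: one possible way to address it, shown only as a sketch and not something this notebook actually does, would be to carve a held-out test split off the new data before any training happens and reserve it purely for evaluation. The variable names are illustrative, and the sketch assumes the encoded reviews `new_XV` and labels `new_Y` are still in memory (above they were set to `None` to save space). ###Code
from sklearn.model_selection import train_test_split

# Hold out 20% of the new data before fitting anything on it. The model would then be trained
# and validated only on new_trainval_*, while new_test_* would give an estimate of performance
# on the new distribution that was never touched during training.
new_trainval_X, new_test_X, new_trainval_y, new_test_y = train_test_split(
    new_XV, new_Y, test_size=0.2, random_state=42)
###Output _____no_output_____ ###Markdown With that caveat noted, we create the transformer and run the (admittedly leaky) baseline evaluation on the new data.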
###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output .........................[2020-05-21 10:32:35 +0000] [14] [INFO] Starting gunicorn 19.10.0 [2020-05-21 10:32:35 +0000] [14] [INFO] Listening at: unix:/tmp/gunicorn.sock (14) [2020-05-21 10:32:35 +0000] [14] [INFO] Using worker: gevent [2020-05-21 10:32:35 +0000] [21] [INFO] Booting worker with pid: 21 [2020-05-21 10:32:35 +0000] [22] [INFO] Booting worker with pid: 22 [2020-05-21 10:32:35 +0000] [26] [INFO] Booting worker with pid: 26 [2020-05-21 10:32:35 +0000] [27] [INFO] Booting worker with pid: 27 [2020-05-21 10:32:35 +0000] [14] [INFO] Starting gunicorn 19.10.0 [2020-05-21 10:32:35 +0000] [14] [INFO] Listening at: unix:/tmp/gunicorn.sock (14) [2020-05-21 10:32:35 +0000] [14] [INFO] Using worker: gevent [2020-05-21 10:32:35 +0000] [21] [INFO] Booting worker with pid: 21 [2020-05-21 10:32:35 +0000] [22] [INFO] Booting worker with pid: 22 [2020-05-21 10:32:35 +0000] [26] [INFO] Booting worker with pid: 26 [2020-05-21 10:32:35 +0000] [27] [INFO] Booting worker with pid: 27 [2020-05-21:10:32:40:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [21/May/2020:10:32:40 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:10:32:40 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" [2020-05-21:10:32:40:INFO] No GPUs detected (normal if no gpus installed) 169.254.255.130 - - [21/May/2020:10:32:40 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:10:32:40 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1" [2020-05-21:10:32:42:INFO] No GPUs detected (normal if no gpus installed) [2020-05-21:10:32:42:INFO] No GPUs detected (normal if no gpus installed) [2020-05-21:10:32:42:INFO] No GPUs detected (normal if no gpus installed) [2020-05-21:10:32:42:INFO] No GPUs detected (normal if no gpus installed) [2020-05-21:10:32:42:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:42:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:43:INFO] No GPUs detected (normal if no gpus installed) [2020-05-21:10:32:43:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:43:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:42:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:42:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:43:INFO] No GPUs detected (normal if no gpus installed) [2020-05-21:10:32:43:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:43:INFO] Determined delimiter of CSV input is ',' 2020-05-21T10:32:40.602:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD 169.254.255.130 - - [21/May/2020:10:32:47 +0000] "POST /invocations HTTP/1.1" 200 12101 "-" "Go-http-client/1.1" [2020-05-21:10:32:47:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:47:INFO] Determined delimiter of CSV input is ',' 
169.254.255.130 - - [21/May/2020:10:32:47 +0000] "POST /invocations HTTP/1.1" 200 12101 "-" "Go-http-client/1.1" [2020-05-21:10:32:47:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:47:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:32:49 +0000] "POST /invocations HTTP/1.1" 200 12107 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:10:32:49 +0000] "POST /invocations HTTP/1.1" 200 12107 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:10:32:50 +0000] "POST /invocations HTTP/1.1" 200 12104 "-" "Go-http-client/1.1" [2020-05-21:10:32:50:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:50:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:32:50 +0000] "POST /invocations HTTP/1.1" 200 12106 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:10:32:50 +0000] "POST /invocations HTTP/1.1" 200 12104 "-" "Go-http-client/1.1" [2020-05-21:10:32:50:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:50:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:32:50 +0000] "POST /invocations HTTP/1.1" 200 12106 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:10:32:50 +0000] "POST /invocations HTTP/1.1" 200 12106 "-" "Go-http-client/1.1" [2020-05-21:10:32:50:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:51:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:32:50 +0000] "POST /invocations HTTP/1.1" 200 12106 "-" "Go-http-client/1.1" [2020-05-21:10:32:50:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:51:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:32:53 +0000] "POST /invocations HTTP/1.1" 200 12108 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:10:32:53 +0000] "POST /invocations HTTP/1.1" 200 12129 "-" "Go-http-client/1.1" [2020-05-21:10:32:53:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:53:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:32:53 +0000] "POST /invocations HTTP/1.1" 200 12108 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:10:32:53 +0000] "POST /invocations HTTP/1.1" 200 12129 "-" "Go-http-client/1.1" [2020-05-21:10:32:53:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:53:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:32:54 +0000] "POST /invocations HTTP/1.1" 200 12134 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:10:32:54 +0000] "POST /invocations HTTP/1.1" 200 12112 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:10:32:54 +0000] "POST /invocations HTTP/1.1" 200 12134 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:10:32:54 +0000] "POST /invocations HTTP/1.1" 200 12112 "-" "Go-http-client/1.1" [2020-05-21:10:32:54:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:54:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:54:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:54:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:57:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:57:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:57:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:57:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:32:57 +0000] "POST /invocations HTTP/1.1" 200 12111 "-" "Go-http-client/1.1" 169.254.255.130 - - 
[21/May/2020:10:32:58 +0000] "POST /invocations HTTP/1.1" 200 12115 "-" "Go-http-client/1.1" [2020-05-21:10:32:58:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:58:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:32:57 +0000] "POST /invocations HTTP/1.1" 200 12111 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:10:32:58 +0000] "POST /invocations HTTP/1.1" 200 12115 "-" "Go-http-client/1.1" [2020-05-21:10:32:58:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:32:58:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:33:00 +0000] "POST /invocations HTTP/1.1" 200 12145 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:10:33:00 +0000] "POST /invocations HTTP/1.1" 200 12134 "-" "Go-http-client/1.1" [2020-05-21:10:33:00:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:33:00 +0000] "POST /invocations HTTP/1.1" 200 12145 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:10:33:00 +0000] "POST /invocations HTTP/1.1" 200 12134 "-" "Go-http-client/1.1" [2020-05-21:10:33:00:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:33:00:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:33:01 +0000] "POST /invocations HTTP/1.1" 200 12144 "-" "Go-http-client/1.1" [2020-05-21:10:33:01:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:33:01 +0000] "POST /invocations HTTP/1.1" 200 12117 "-" "Go-http-client/1.1" [2020-05-21:10:33:00:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:33:01 +0000] "POST /invocations HTTP/1.1" 200 12144 "-" "Go-http-client/1.1" [2020-05-21:10:33:01:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:33:01 +0000] "POST /invocations HTTP/1.1" 200 12117 "-" "Go-http-client/1.1" [2020-05-21:10:33:01:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:33:01:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:33:04 +0000] "POST /invocations HTTP/1.1" 200 12131 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:10:33:04 +0000] "POST /invocations HTTP/1.1" 200 12102 "-" "Go-http-client/1.1" [2020-05-21:10:33:04:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:33:04:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:33:04 +0000] "POST /invocations HTTP/1.1" 200 12131 "-" "Go-http-client/1.1" 169.254.255.130 - - [21/May/2020:10:33:04 +0000] "POST /invocations HTTP/1.1" 200 12102 "-" "Go-http-client/1.1" [2020-05-21:10:33:04:INFO] Determined delimiter of CSV input is ',' [2020-05-21:10:33:04:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:33:05 +0000] "POST /invocations HTTP/1.1" 200 12128 "-" "Go-http-client/1.1" [2020-05-21:10:33:05:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:33:05 +0000] "POST /invocations HTTP/1.1" 200 12108 "-" "Go-http-client/1.1" [2020-05-21:10:33:05:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:33:05 +0000] "POST /invocations HTTP/1.1" 200 12128 "-" "Go-http-client/1.1" [2020-05-21:10:33:05:INFO] Determined delimiter of CSV input is ',' 169.254.255.130 - - [21/May/2020:10:33:05 +0000] "POST /invocations HTTP/1.1" 200 12108 "-" "Go-http-client/1.1" [2020-05-21:10:33:05:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. 
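If we would rather stay inside Python than shell out to the AWS CLI, something along the following lines should also work (this is only a sketch; it assumes that `output_path` has the usual `s3://bucket/prefix` form): ###Code
# Illustrative alternative to the `aws s3 cp` call below: download the batch
# transform output with boto3 instead of the CLI.
import os
import boto3
from urllib.parse import urlparse

parsed = urlparse(new_xgb_transformer.output_path)          # s3://<bucket>/<prefix>
bucket, key_prefix = parsed.netloc, parsed.path.lstrip('/')

s3 = boto3.client('s3')
# List every object written under the transform job's output prefix and download it locally.
for obj in s3.list_objects_v2(Bucket=bucket, Prefix=key_prefix).get('Contents', []):
    local_file = os.path.join(data_dir, os.path.basename(obj['Key']))
    s3.download_file(bucket, obj['Key'], local_file)
###Output
_____no_output_____
###Markdown Here we simply use the AWS CLI: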
###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/470.4 KiB (4.4 MiB/s) with 1 file(s) remaining Completed 470.4 KiB/470.4 KiB (7.7 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-west-1-019518462631/sagemaker-xgboost-2020-05-21-10-28-40-772/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. 
As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "New-XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output ---------------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. 
Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. 
###Code # Make sure that we use SageMaker 1.x !pip install sagemaker==1.72.0 ###Output Collecting sagemaker==1.72.0 Downloading sagemaker-1.72.0.tar.gz (297 kB)  |████████████████████████████████| 297 kB 21.6 MB/s eta 0:00:01 [?25hRequirement already satisfied: boto3>=1.14.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.16.63) Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.19.5) Requirement already satisfied: protobuf>=3.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.14.0) Requirement already satisfied: scipy>=0.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (1.4.1) Requirement already satisfied: protobuf3-to-dict>=0.1.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (0.1.5) Requirement already satisfied: importlib-metadata>=1.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (3.4.0) Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sagemaker==1.72.0) (20.8) Collecting smdebug-rulesconfig==0.1.4 Downloading smdebug_rulesconfig-0.1.4-py2.py3-none-any.whl (10 kB) Requirement already satisfied: botocore<1.20.0,>=1.19.63 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (1.19.63) Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.3.4) Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3>=1.14.12->sagemaker==1.72.0) (0.10.0) Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.63->boto3>=1.14.12->sagemaker==1.72.0) (2.8.1) Requirement already satisfied: urllib3<1.27,>=1.25.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from botocore<1.20.0,>=1.19.63->boto3>=1.14.12->sagemaker==1.72.0) (1.26.2) Requirement already satisfied: typing-extensions>=3.6.4 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.7.4.3) Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata>=1.4.0->sagemaker==1.72.0) (3.4.0) Requirement already satisfied: pyparsing>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->sagemaker==1.72.0) (2.4.7) Requirement already satisfied: six>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from protobuf>=3.1->sagemaker==1.72.0) (1.15.0) Building wheels for collected packages: sagemaker Building wheel for sagemaker (setup.py) ... 
[?25ldone [?25h Created wheel for sagemaker: filename=sagemaker-1.72.0-py2.py3-none-any.whl size=386358 sha256=3bd76e98c0bff3ca55eed3efbb51638b14b9d9eb3bb35cec0913b961c1710e47 Stored in directory: /home/ec2-user/.cache/pip/wheels/c3/58/70/85faf4437568bfaa4c419937569ba1fe54d44c5db42406bbd7 Successfully built sagemaker Installing collected packages: smdebug-rulesconfig, sagemaker Attempting uninstall: smdebug-rulesconfig Found existing installation: smdebug-rulesconfig 1.0.1 Uninstalling smdebug-rulesconfig-1.0.1: Successfully uninstalled smdebug-rulesconfig-1.0.1 Attempting uninstall: sagemaker Found existing installation: sagemaker 2.24.1 Uninstalling sagemaker-2.24.1: Successfully uninstalled sagemaker-2.24.1 Successfully installed sagemaker-1.72.0 smdebug-rulesconfig-0.1.4 WARNING: You are using pip version 20.3.3; however, version 21.0.1 is available. You should consider upgrading via the '/home/ec2-user/anaconda3/envs/pytorch_p36/bin/python -m pip install --upgrade pip' command. ###Markdown Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2021-02-07 20:17:15-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 23.9MB/s in 4.4s 2021-02-07 20:17:19 (18.1 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
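The extracted archive keeps one review per text file, organized under a split and a label directory. A quick, purely illustrative peek at that layout (these paths assume the download above succeeded): ###Code
# Illustrative only: the extracted dataset is laid out as
#   ../data/aclImdb/{train,test}/{pos,neg}/*.txt  -- one review per file.
import glob

pos_train_files = glob.glob('../data/aclImdb/train/pos/*.txt')
print(pos_train_files[:3])   # a few of the positive training reviews
print(len(pos_train_files))  # 12500 files if the download and extraction completed
###Output
_____no_output_____
###Markdown The helper below walks this directory structure and gathers the reviews together with their labels: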
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
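To make these steps concrete, here is a small self-contained illustration on a made-up review (a sketch only; the exact tokens you get back may differ slightly): ###Code
# Toy illustration of the cleaning applied below: strip HTML, lower-case,
# drop punctuation and stopwords, and stem what remains.
import re
import nltk
from bs4 import BeautifulSoup
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer

nltk.download("stopwords", quiet=True)

raw = "<br />This movie was GREAT! The acting, however, was not convincing..."
text = BeautifulSoup(raw, "html.parser").get_text()          # remove the <br /> tag
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())             # keep only letters and digits
words = [w for w in text.split() if w not in stopwords.words("english")]
words = [PorterStemmer().stem(w) for w in words]               # e.g. 'acting' -> 'act'
print(words)                                                   # something like ['movi', 'great', 'act', ...]
###Output
_____no_output_____
###Markdown The helper below applies exactly these steps to every review: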
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Wrote preprocessed data to cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
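Before defining the helper that does this for the full dataset, here is a tiny self-contained example of a bag-of-words encoding on toy documents (not the IMDb data): ###Code
# Toy bag-of-words example: the vocabulary is learned from the training
# documents only, and the test documents are encoded against that vocabulary.
from sklearn.feature_extraction.text import CountVectorizer

train_docs = ["great fun great cast", "boring plot boring acting"]
test_docs = ["great plot unseen word"]

vec = CountVectorizer()
train_bow = vec.fit_transform(train_docs).toarray()  # fit on the training documents only
test_bow = vec.transform(test_docs).toarray()        # words outside the vocabulary are simply dropped

print(sorted(vec.vocabulary_, key=vec.vocabulary_.get))  # ['acting', 'boring', 'cast', 'fun', 'great', 'plot']
print(train_bow)  # rows of word counts, one per training document
print(test_bow)   # 'unseen' and 'word' never made it into the vocabulary
###Output
_____no_output_____
###Markdown The helper below builds the same kind of encoding for the real reviews and caches the result: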
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2021-02-07 21:40:23 Starting - Starting the training job... 2021-02-07 21:40:25 Starting - Launching requested ML instances...... 2021-02-07 21:41:38 Starting - Preparing the instances for training...... 2021-02-07 21:42:43 Downloading - Downloading input data 2021-02-07 21:42:43 Training - Downloading the training image..Arguments: train [2021-02-07:21:43:05:INFO] Running standalone xgboost training. [2021-02-07:21:43:05:INFO] File size need to be processed in the node: 238.47mb.
Available memory size in the node: 8419.5mb [2021-02-07:21:43:05:INFO] Determined delimiter of CSV input is ',' [21:43:05] S3DistributionType set as FullyReplicated [21:43:07] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2021-02-07:21:43:07:INFO] Determined delimiter of CSV input is ',' [21:43:07] S3DistributionType set as FullyReplicated [21:43:08] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, 2021-02-07 21:43:04 Training - Training image download completed. Training in progress.[21:43:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [0]#011train-error:0.2996#011validation-error:0.3046 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [21:43:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [1]#011train-error:0.297267#011validation-error:0.3005 [21:43:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [2]#011train-error:0.282333#011validation-error:0.2827 [21:43:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [3]#011train-error:0.264333#011validation-error:0.2681 [21:43:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [4]#011train-error:0.267667#011validation-error:0.2681 [21:43:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 0 pruned nodes, max_depth=5 [5]#011train-error:0.2492#011validation-error:0.2541 [21:43:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [6]#011train-error:0.248667#011validation-error:0.2546 [21:43:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [7]#011train-error:0.239133#011validation-error:0.2455 [21:43:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [8]#011train-error:0.2344#011validation-error:0.2411 [21:43:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 10 pruned nodes, max_depth=5 [9]#011train-error:0.2278#011validation-error:0.2347 [21:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [10]#011train-error:0.2222#011validation-error:0.2302 [21:43:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [11]#011train-error:0.215533#011validation-error:0.2258 [21:43:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [12]#011train-error:0.2134#011validation-error:0.2246 [21:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [13]#011train-error:0.209733#011validation-error:0.2176 [21:43:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [14]#011train-error:0.209733#011validation-error:0.2186 [21:43:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [15]#011train-error:0.205267#011validation-error:0.2153 [21:43:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned 
nodes, max_depth=5 [16]#011train-error:0.2004#011validation-error:0.213 [21:43:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 2 pruned nodes, max_depth=5 [17]#011train-error:0.197067#011validation-error:0.2098 [21:43:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [18]#011train-error:0.1936#011validation-error:0.208 [21:43:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [19]#011train-error:0.1912#011validation-error:0.2047 [21:43:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5 [20]#011train-error:0.188267#011validation-error:0.2031 [21:43:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [21]#011train-error:0.185267#011validation-error:0.1999 [21:43:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [22]#011train-error:0.183#011validation-error:0.1987 [21:43:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [23]#011train-error:0.181267#011validation-error:0.1965 [21:43:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [24]#011train-error:0.176533#011validation-error:0.1949 [21:43:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [25]#011train-error:0.173733#011validation-error:0.1908 [21:43:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [26]#011train-error:0.1706#011validation-error:0.1888 [21:43:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5 [27]#011train-error:0.167667#011validation-error:0.1866 [21:43:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [28]#011train-error:0.1648#011validation-error:0.1846 [21:43:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [29]#011train-error:0.163133#011validation-error:0.184 [21:43:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [30]#011train-error:0.162467#011validation-error:0.1821 [21:43:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [31]#011train-error:0.161333#011validation-error:0.1818 [21:43:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [32]#011train-error:0.159#011validation-error:0.1807 [21:43:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [33]#011train-error:0.158533#011validation-error:0.1798 [21:43:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [34]#011train-error:0.156733#011validation-error:0.1794 [21:43:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [35]#011train-error:0.156733#011validation-error:0.178 [21:43:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [36]#011train-error:0.153533#011validation-error:0.1783 [21:43:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 14 pruned nodes, max_depth=5 
[37]#011train-error:0.152067#011validation-error:0.177 [21:44:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [38]#011train-error:0.151267#011validation-error:0.1753 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback, we can run the `wait()` method.
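Alternatively, if we would rather not block the notebook, the job's status can be polled through the low-level client. A rough sketch (the job name used here is a placeholder; in practice you would copy the real name from the SageMaker console): ###Code
# Sketch only: poll the transform job's status rather than blocking on wait().
# 'transform_job_name' is a placeholder; substitute the actual job name.
import time

transform_job_name = 'xgboost-YYYY-MM-DD-HH-MM-SS-XXX'

while True:
    desc = session.sagemaker_client.describe_transform_job(TransformJobName=transform_job_name)
    status = desc['TransformJobStatus']   # 'InProgress', 'Completed', 'Failed', 'Stopped', ...
    print(status)
    if status in ('Completed', 'Failed', 'Stopped'):
        break
    time.sleep(30)
###Output
_____no_output_____
###Markdown Here we simply block until the job has finished: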
###Code xgb_transformer.wait() ###Output ................................2021-02-07T21:52:17.538:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2021-02-07 21:52:17 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-02-07 21:52:17 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-02-07 21:52:17 +0000] [1] [INFO] Using worker: gevent [2021-02-07 21:52:17 +0000] [37] [INFO] Booting worker with pid: 37 Arguments: serve [2021-02-07 21:52:17 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-02-07 21:52:17 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-02-07 21:52:17 +0000] [1] [INFO] Using worker: gevent [2021-02-07 21:52:17 +0000] [37] [INFO] Booting worker with pid: 37 [2021-02-07 21:52:17 +0000] [38] [INFO] Booting worker with pid: 38 [2021-02-07 21:52:17 +0000] [39] [INFO] Booting worker with pid: 39 [2021-02-07:21:52:17:INFO] Model loaded successfully for worker : 37 [2021-02-07 21:52:17 +0000] [40] [INFO] Booting worker with pid: 40 [2021-02-07:21:52:17:INFO] Model loaded successfully for worker : 38 [2021-02-07:21:52:17:INFO] Model loaded successfully for worker : 39 [2021-02-07:21:52:17:INFO] Model loaded successfully for worker : 40 [2021-02-07:21:52:17:INFO] Sniff delimiter as ',' [2021-02-07:21:52:17:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:18:INFO] Sniff delimiter as ',' [2021-02-07:21:52:18:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:18:INFO] Sniff delimiter as ',' [2021-02-07:21:52:18:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:18:INFO] Sniff delimiter as ',' [2021-02-07:21:52:18:INFO] Determined delimiter of CSV input is ',' [2021-02-07 21:52:17 +0000] [38] [INFO] Booting worker with pid: 38 [2021-02-07 21:52:17 +0000] [39] [INFO] Booting worker with pid: 39 [2021-02-07:21:52:17:INFO] Model loaded successfully for worker : 37 [2021-02-07 21:52:17 +0000] [40] [INFO] Booting worker with pid: 40 [2021-02-07:21:52:17:INFO] Model loaded successfully for worker : 38 [2021-02-07:21:52:17:INFO] Model loaded successfully for worker : 39 [2021-02-07:21:52:17:INFO] Model loaded successfully for worker : 40 [2021-02-07:21:52:17:INFO] Sniff delimiter as ',' [2021-02-07:21:52:17:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:18:INFO] Sniff delimiter as ',' [2021-02-07:21:52:18:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:18:INFO] Sniff delimiter as ',' [2021-02-07:21:52:18:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:18:INFO] Sniff delimiter as ',' [2021-02-07:21:52:18:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:20:INFO] Sniff delimiter as ',' [2021-02-07:21:52:20:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:20:INFO] Sniff delimiter as ',' [2021-02-07:21:52:20:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:20:INFO] Sniff delimiter as ',' [2021-02-07:21:52:20:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:20:INFO] Sniff delimiter as ',' [2021-02-07:21:52:20:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:20:INFO] Sniff delimiter as ',' [2021-02-07:21:52:20:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:20:INFO] Sniff delimiter as ',' [2021-02-07:21:52:20:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:20:INFO] Sniff delimiter as ',' [2021-02-07:21:52:20:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:20:INFO] Sniff delimiter as ',' 
[2021-02-07:21:52:20:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:22:INFO] Sniff delimiter as ',' [2021-02-07:21:52:22:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:23:INFO] Sniff delimiter as ',' [2021-02-07:21:52:23:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:23:INFO] Sniff delimiter as ',' [2021-02-07:21:52:23:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:23:INFO] Sniff delimiter as ',' [2021-02-07:21:52:23:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:22:INFO] Sniff delimiter as ',' [2021-02-07:21:52:22:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:23:INFO] Sniff delimiter as ',' [2021-02-07:21:52:23:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:23:INFO] Sniff delimiter as ',' [2021-02-07:21:52:23:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:23:INFO] Sniff delimiter as ',' [2021-02-07:21:52:23:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:25:INFO] Sniff delimiter as ',' [2021-02-07:21:52:25:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:25:INFO] Sniff delimiter as ',' [2021-02-07:21:52:25:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:25:INFO] Sniff delimiter as ',' [2021-02-07:21:52:25:INFO] Sniff delimiter as ',' [2021-02-07:21:52:25:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:25:INFO] Sniff delimiter as ',' [2021-02-07:21:52:25:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:25:INFO] Sniff delimiter as ',' [2021-02-07:21:52:25:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:25:INFO] Sniff delimiter as ',' [2021-02-07:21:52:25:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:25:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:25:INFO] Sniff delimiter as ',' [2021-02-07:21:52:25:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:28:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:28:INFO] Sniff delimiter as ',' [2021-02-07:21:52:28:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:28:INFO] Sniff delimiter as ',' [2021-02-07:21:52:28:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:28:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:28:INFO] Sniff delimiter as ',' [2021-02-07:21:52:28:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:28:INFO] Sniff delimiter as ',' [2021-02-07:21:52:28:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:30:INFO] Sniff delimiter as ',' [2021-02-07:21:52:30:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:30:INFO] Sniff delimiter as ',' [2021-02-07:21:52:30:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:30:INFO] Sniff delimiter as ',' [2021-02-07:21:52:30:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:30:INFO] Sniff delimiter as ',' [2021-02-07:21:52:30:INFO] Sniff delimiter as ',' [2021-02-07:21:52:30:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:30:INFO] Sniff delimiter as ',' [2021-02-07:21:52:30:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:30:INFO] Sniff delimiter as ',' [2021-02-07:21:52:30:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:30:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:30:INFO] Sniff delimiter as ',' [2021-02-07:21:52:30:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:32:INFO] Sniff delimiter as 
',' [2021-02-07:21:52:32:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:32:INFO] Sniff delimiter as ',' [2021-02-07:21:52:32:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:32:INFO] Sniff delimiter as ',' [2021-02-07:21:52:32:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:32:INFO] Sniff delimiter as ',' [2021-02-07:21:52:32:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:33:INFO] Sniff delimiter as ',' [2021-02-07:21:52:33:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:32:INFO] Sniff delimiter as ',' [2021-02-07:21:52:32:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:32:INFO] Sniff delimiter as ',' [2021-02-07:21:52:32:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:33:INFO] Sniff delimiter as ',' [2021-02-07:21:52:33:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:35:INFO] Sniff delimiter as ',' [2021-02-07:21:52:35:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:35:INFO] Sniff delimiter as ',' [2021-02-07:21:52:35:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:35:INFO] Sniff delimiter as ',' [2021-02-07:21:52:35:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:35:INFO] Sniff delimiter as ',' [2021-02-07:21:52:35:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:35:INFO] Sniff delimiter as ',' [2021-02-07:21:52:35:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:35:INFO] Sniff delimiter as ',' [2021-02-07:21:52:35:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:35:INFO] Sniff delimiter as ',' [2021-02-07:21:52:35:INFO] Sniff delimiter as ',' [2021-02-07:21:52:35:INFO] Determined delimiter of CSV input is ',' [2021-02-07:21:52:35:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/369.3 KiB (2.4 MiB/s) with 1 file(s) remaining Completed 369.3 KiB/369.3 KiB (3.4 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-1-701904821656/xgboost-2021-02-07-21-47-02-210/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). 
The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = None # SOLUTION: vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = None # SOLUTION new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv # SOLUTION: pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = None # SOLUTION new_data_location = session.upload_data(os.path.join(data_dir,'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. 
# SOLUTION xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ..................................Arguments: serve Arguments: serve [2021-02-08 01:09:16 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-02-08 01:09:16 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-02-08 01:09:16 +0000] [1] [INFO] Using worker: gevent [2021-02-08 01:09:16 +0000] [36] [INFO] Booting worker with pid: 36 [2021-02-08 01:09:16 +0000] [37] [INFO] Booting worker with pid: 37 [2021-02-08:01:09:16:INFO] Model loaded successfully for worker : 36 [2021-02-08 01:09:16 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2021-02-08 01:09:16 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2021-02-08 01:09:16 +0000] [1] [INFO] Using worker: gevent [2021-02-08 01:09:16 +0000] [36] [INFO] Booting worker with pid: 36 [2021-02-08 01:09:16 +0000] [37] [INFO] Booting worker with pid: 37 [2021-02-08:01:09:16:INFO] Model loaded successfully for worker : 36 [2021-02-08 01:09:16 +0000] [38] [INFO] Booting worker with pid: 38 [2021-02-08:01:09:16:INFO] Model loaded successfully for worker : 37 [2021-02-08 01:09:16 +0000] [39] [INFO] Booting worker with pid: 39 [2021-02-08:01:09:16:INFO] Model loaded successfully for worker : 38 [2021-02-08:01:09:16:INFO] Model loaded successfully for worker : 39 [2021-02-08:01:09:16:INFO] Sniff delimiter as ',' [2021-02-08:01:09:16:INFO] Determined delimiter of CSV input is ',' [2021-02-08:01:09:17:INFO] Sniff delimiter as ',' [2021-02-08:01:09:17:INFO] Determined delimiter of CSV input is ',' [2021-02-08:01:09:17:INFO] Sniff delimiter as ',' [2021-02-08:01:09:17:INFO] Determined delimiter of CSV input is ',' [2021-02-08 01:09:16 +0000] [38] [INFO] Booting worker with pid: 38 [2021-02-08:01:09:16:INFO] Model loaded successfully for worker : 37 [2021-02-08 01:09:16 +0000] [39] [INFO] Booting worker with pid: 39 [2021-02-08:01:09:16:INFO] Model loaded successfully for worker : 38 [2021-02-08:01:09:16:INFO] Model loaded successfully for worker : 39 [2021-02-08:01:09:16:INFO] Sniff delimiter as ',' [2021-02-08:01:09:16:INFO] Determined delimiter of CSV input is ',' [2021-02-08:01:09:17:INFO] Sniff delimiter as ',' [2021-02-08:01:09:17:INFO] Determined delimiter of CSV input is ',' [2021-02-08:01:09:17:INFO] Sniff delimiter as ',' [2021-02-08:01:09:17:INFO] Determined delimiter of CSV input is ',' [2021-02-08:01:09:18:INFO] Sniff delimiter as ',' [2021-02-08:01:09:18:INFO] Determined delimiter of CSV input is ',' [2021-02-08:01:09:19:INFO] Sniff delimiter as ',' [2021-02-08:01:09:19:INFO] Determined delimiter of CSV input is ',' [2021-02-08:01:09:18:INFO] Sniff delimiter as ',' [2021-02-08:01:09:18:INFO] Determined delimiter of CSV input is ',' [2021-02-08:01:09:19:INFO] Sniff delimiter as ',' [2021-02-08:01:09:19:INFO] Determined delimiter of CSV input is ',' [2021-02-08:01:09:20:INFO] Sniff delimiter as ',' [2021-02-08:01:09:20:INFO] Determined delimiter of CSV input is ',' [2021-02-08:01:09:20:INFO] Sniff delimiter as ',' [2021-02-08:01:09:20:INFO] Determined delimiter of CSV input is ',' [2021-02-08:01:09:20:INFO] Sniff delimiter as ',' [2021-02-08:01:09:20:INFO] Determined delimiter of CSV input is ',' [2021-02-08:01:09:20:INFO] Sniff delimiter as ',' [2021-02-08:01:09:20:INFO] Determined delimiter of CSV input is ',' 2021-02-08T01:09:16.637:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2021-02-08:01:09:21:INFO] Sniff delimiter as ',' [2021-02-08:01:09:21:INFO] Determined 
delimiter of CSV input is ',' ... [2021-02-08:01:09:38:INFO] Sniff
delimiter as ',' [2021-02-08:01:09:38:INFO] Determined delimiter of CSV input is ',' [2021-02-08:01:09:38:INFO] Sniff delimiter as ',' [2021-02-08:01:09:38:INFO] Determined delimiter of CSV input is ',' [2021-02-08:01:09:38:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/369.6 KiB (2.2 MiB/s) with 1 file(s) remaining Completed 369.6 KiB/369.6 KiB (3.1 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-1-701904821656/xgboost-2021-02-08-01-03-48-224/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = None # SOLUTION xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. 
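For readers who have not used generators before, the cell below is a minimal illustration (not part of the original notebook) of the pattern used by `get_sample` in the next cell: a function containing `yield` produces values lazily, and `next()` asks for one value at a time, so we never have to walk through the whole data set up front. ###Code
# A tiny generator: values are produced one at a time, only when requested with next()
def even_numbers(numbers):
    for n in numbers:
        if n % 2 == 0:
            yield n

evens = even_numbers(range(10))
print(next(evens))  # 0
print(next(evens))  # 2
###Output _____no_output_____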
###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['imagin', 'look', 'forward', 'king', 'ant', 'massiv', 'gordon', 'fan', 'await', 'european', 'premier', 'wick', 'anticip', 'especi', 'sinc', 'love', 'dagon', 'gordon', 'last', 'achiev', 'much', 'king', 'ant', 'premier', 'countri', 'gordon', 'came', 'present', 'unfortun', 'go', 'congratul', 'afterward', 'king', 'ant', 'uninspir', 'mediocr', 'film', 'date', 'realli', 'qualiti', 'level', 'never', 'surpass', 'ordinari', 'tv', 'thriller', 'standard', 'plot', 'outlin', 'terribl', 'routin', 'except', 'poor', 'scene', 'typic', 'gordon', 'touch', 'never', 'recogn', 'top', 'alreadi', 'weak', 'script', 'hole', 'swiss', 'bowl', 'chees', 'involv', 'young', 'wannab', 'crook', 'hire', 'commit', 'murder', 'cours', 'pay', 'cours', 'fall', 'love', 'victim', 'wife', 'cours', 'aveng', 'sequenc', 'guy', 'descent', 'spiral', 'mad', 'worth', 'mention', 'one', 'remind', 'fact', 'still', 'watch', 'stuart', 'gordon', 'film', 'act', 'perform', 'averag', 'mccenna', 'heroic', 'lowlif', 'georg', 'norm', 'peterson', 'wendt', 'chubbi', 'bastard', 'kari', 'wuhrer', 'good', 'heart', 'sex', 'bomb', 'extrem', 'illog', 'thing', 'happen', 'constantli', 'dull', 'stori', 'becom', 'irrit', 'quickli', 'make', 'effect', 'enough', 'even', 'satisfi', 'amateur', 'horror', 'fan', 'read', 'comment', 'king', 'ant', 'claim', 'gordon', 'best', 'sinc', 'final', 'thought', 'provok', 'matur', 'film', 'well', 'case', 'rather', 'stay', 'immatur', 'give', 'anim', 'anoth', 'view', 'thank', 'much', 'oh', 'well', 'guess', 'everi', 'good', 'director', 'run', 'steam', 'inspir', 'eventu', 'bad', 'also', 'overcam', 'stuart', 'gordon', 'banana'], 1) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output _____no_output_____ ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. 
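Beyond printing the set differences, one way to start on the open-ended question a little further below is to check how often the differing words actually occur in the new reviews. The cell below is a sketch that is not part of the original exercise; it assumes `new_X`, `new_vectorizer`, `original_vocabulary` and `new_vocabulary` are still in memory at this point. ###Code
import numpy as np

# Total number of times each word in the new vocabulary occurs across the new reviews
# (summing the sparse document-term matrix column-wise avoids building a huge dense array).
new_word_counts = np.asarray(new_vectorizer.transform(new_X).sum(axis=0)).ravel()

# Look at how often the words that appear only in the new vocabulary are actually used,
# most frequent first.
only_new = new_vocabulary - original_vocabulary
for word in sorted(only_new, key=lambda w: -new_word_counts[new_vectorizer.vocabulary_[w]]):
    print(word, new_word_counts[new_vectorizer.vocabulary_[word]])
###Output _____no_output_____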
###Code print(new_vocabulary - original_vocabulary) ###Output _____no_output_____ ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question:** What exactly is going on here? Which words (if any) appear with a larger than expected frequency, and what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed. To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results. In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd
# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])
new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____
###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = None new_val_location = None new_train_location = None ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = None # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = None s3_new_input_validation = None # TODO: Using the new validation and training data, 'fit' your new model. ###Output _____no_output_____ ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = None ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. ###Output _____no_output_____ ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown And see how well the model did. 
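The TODO cells above were left unfilled in this copy of the notebook. Before the results are read back in below, here is one possible way they could be completed; it is only a sketch that mirrors the corresponding cells from Step 4, and it assumes the `session`, `role`, `container`, `prefix` and `data_dir` objects created earlier are still defined (the hyperparameters and instance types are simply the ones used for the original model, not requirements). ###Code
# Upload the new data sets to S3 (mirrors the earlier upload cells)
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix)

# A new estimator that re-uses the same container, role and hyperparameters as before
new_xgb = sagemaker.estimator.Estimator(container, role,
                                        train_instance_count=1,
                                        train_instance_type='ml.m4.xlarge',
                                        output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                        sagemaker_session=session)
new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8,
                            silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500)

# Tell SageMaker where the new training and validation data live, then fit
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv')
s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv')
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation})

# Batch transform the new data with the re-trained model
new_xgb_transformer = new_xgb.transformer(instance_count=1, instance_type='ml.m4.xlarge')
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()
###Output _____no_output_____
The `wait()` call at the end is optional but convenient here, since the accuracy check below needs the transform job to have finished.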
###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output _____no_output_____ ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = None ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. 
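The endpoint configuration and endpoint update cells that follow are also left as TODOs. One possible low-level sketch, using the boto3 client that the SageMaker session exposes, is shown below; the configuration name and variant name are only illustrative choices. ###Code
from time import gmtime, strftime

# An endpoint configuration name; it just needs to be unique, so a timestamp is a common choice
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())

# Describe a single production variant that serves the newly trained model
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
                                    EndpointConfigName = new_xgb_endpoint_config_name,
                                    ProductionVariants = [{
                                        "InstanceType": "ml.m4.xlarge",
                                        "InitialVariantWeight": 1,
                                        "InitialInstanceCount": 1,
                                        "ModelName": new_xgb_transformer.model_name,
                                        "VariantName": "AllTraffic"
                                    }])

# Point the existing endpoint at the new configuration
session.sagemaker_client.update_endpoint(EndpointName = xgb_predictor.endpoint,
                                         EndpointConfigName = new_xgb_endpoint_config_name)
###Output _____no_output_____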
###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = None # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = None ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. 
###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-05-11 15:51:55-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 
200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 10.2MB/s in 11s 2020-05-11 15:52:07 (7.36 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Wrote preprocessed data to cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
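Before the full, cached implementation in the next cell, the following toy example (not part of the original notebook) illustrates what a Bag-of-Words encoding looks like for already-tokenized reviews: the vectorizer builds a vocabulary from the documents it is fit on, and each document becomes a vector of word counts over that vocabulary. ###Code
from sklearn.feature_extraction.text import CountVectorizer

# Two tiny, already-tokenized "reviews"; the identity preprocessor/tokenizer trick below
# is the same one used in the real implementation.
toy_reviews = [['great', 'movi', 'great'], ['bad', 'movi']]

toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
toy_features = toy_vectorizer.fit_transform(toy_reviews).toarray()

print(toy_vectorizer.vocabulary_)  # word -> column index, e.g. {'bad': 0, 'great': 1, 'movi': 2}
print(toy_features)                # per-review counts of each vocabulary word
###Output _____no_output_____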
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
    os.makedirs(data_dir)
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-update'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker.
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.
- Model Artifacts
- Training Code (Container)
- Inference Code (Container)
The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code
from sagemaker import get_execution_role

# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()

# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')

# First we create a SageMaker estimator object for our model.
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use
                                    role,      # What is our current IAM Role
                                    train_instance_count=1, # How many compute instances
                                    train_instance_type='ml.m4.xlarge', # What kind of compute instances
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                    sagemaker_session=session)

# And then set the algorithm specific parameters.
xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8,
                        silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500)
###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation.
Available memory size in the node: 8497.93mb [2020-05-11:18:23:10:INFO] Determined delimiter of CSV input is ',' [18:23:10] S3DistributionType set as FullyReplicated [18:23:12] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-05-11:18:23:12:INFO] Determined delimiter of CSV input is ',' [18:23:12] S3DistributionType set as FullyReplicated [18:23:13] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [18:23:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 2 pruned nodes, max_depth=5 [0]#011train-error:0.290667#011validation-error:0.3068 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [18:23:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [1]#011train-error:0.276#011validation-error:0.2896 [18:23:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5 [2]#011train-error:0.277#011validation-error:0.2914 [18:23:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 0 pruned nodes, max_depth=5 [3]#011train-error:0.266133#011validation-error:0.2785 [18:23:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 2 pruned nodes, max_depth=5 [4]#011train-error:0.2604#011validation-error:0.2728 2020-05-11 18:23:09 Training - Training image download completed. Training in progress.[18:23:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 2 pruned nodes, max_depth=5 [5]#011train-error:0.241733#011validation-error:0.2569 [18:23:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [6]#011train-error:0.238#011validation-error:0.2538 [18:23:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [7]#011train-error:0.231267#011validation-error:0.2493 [18:23:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [8]#011train-error:0.2242#011validation-error:0.2444 [18:23:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 2 pruned nodes, max_depth=5 [9]#011train-error:0.223133#011validation-error:0.2437 [18:23:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [10]#011train-error:0.218067#011validation-error:0.2382 [18:23:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 16 pruned nodes, max_depth=5 [11]#011train-error:0.217133#011validation-error:0.2382 [18:23:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [12]#011train-error:0.212733#011validation-error:0.2338 [18:23:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 14 pruned nodes, max_depth=5 [13]#011train-error:0.2066#011validation-error:0.2272 [18:23:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 8 pruned nodes, max_depth=5 [14]#011train-error:0.200267#011validation-error:0.2268 [18:23:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [15]#011train-error:0.1982#011validation-error:0.2242 [18:23:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, 
max_depth=5 [16]#011train-error:0.196133#011validation-error:0.2213 [18:23:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 16 pruned nodes, max_depth=5 [17]#011train-error:0.1928#011validation-error:0.2182 [18:23:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [18]#011train-error:0.191933#011validation-error:0.2152 [18:23:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5 [19]#011train-error:0.1888#011validation-error:0.2122 [18:23:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [20]#011train-error:0.186267#011validation-error:0.2123 [18:23:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [21]#011train-error:0.184067#011validation-error:0.2108 [18:23:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [22]#011train-error:0.1804#011validation-error:0.2075 [18:23:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [23]#011train-error:0.175933#011validation-error:0.2075 [18:23:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [24]#011train-error:0.173667#011validation-error:0.2041 [18:23:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5 [25]#011train-error:0.173733#011validation-error:0.2026 [18:23:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 12 pruned nodes, max_depth=5 [26]#011train-error:0.169867#011validation-error:0.1988 [18:23:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [27]#011train-error:0.1668#011validation-error:0.1973 [18:23:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [28]#011train-error:0.166533#011validation-error:0.1979 [18:23:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [29]#011train-error:0.165333#011validation-error:0.1952 [18:23:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [30]#011train-error:0.1634#011validation-error:0.1944 [18:23:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [31]#011train-error:0.1622#011validation-error:0.1941 [18:23:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [32]#011train-error:0.1592#011validation-error:0.1914 [18:23:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 2 pruned nodes, max_depth=5 [33]#011train-error:0.157933#011validation-error:0.1883 [18:24:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [34]#011train-error:0.155733#011validation-error:0.1884 [18:24:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [35]#011train-error:0.154333#011validation-error:0.186 [18:24:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [36]#011train-error:0.153267#011validation-error:0.1843 [18:24:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 
[37]#011train-error:0.1524#011validation-error:0.1835 [18:24:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [38]#011train-error:0.151#011validation-error:0.1831 [18:24:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [39]#011train-error:0.149667#011validation-error:0.1828 [18:24:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [40]#011train-error:0.148867#011validation-error:0.182 [18:24:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [41]#011train-error:0.148133#011validation-error:0.1814 [18:24:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [42]#011train-error:0.147867#011validation-error:0.1813 [18:24:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5 [43]#011train-error:0.1466#011validation-error:0.1795 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real-time. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples at once. An example of this in industry might be producing an end-of-month report. This method of inference is also useful to us because it means we can perform inference on our entire test set. To perform a batch transform job we first need to create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data, so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set, we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running, but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback, we can call the `wait()` method.
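If blocking the notebook on `wait()` is not convenient, the status of the job can also be polled directly with `boto3`. The snippet below is only an illustrative sketch, not part of the original workflow, and it assumes the transform job name is known (for example, from the SageMaker console or the transformer object).

```python
# Sketch (assumption): poll a batch transform job by name instead of blocking on wait().
import time
import boto3

sm_client = boto3.client('sagemaker')

def poll_transform_job(transform_job_name, delay=30):
    """Return the terminal status of a SageMaker batch transform job."""
    while True:
        desc = sm_client.describe_transform_job(TransformJobName=transform_job_name)
        status = desc['TransformJobStatus']
        if status in ('Completed', 'Failed', 'Stopped'):
            return status
        time.sleep(delay)
```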
###Code xgb_transformer.wait() ###Output ..................Arguments: serve [2020-05-11 18:29:12 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-05-11 18:29:12 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-05-11 18:29:12 +0000] [1] [INFO] Using worker: gevent [2020-05-11 18:29:12 +0000] [37] [INFO] Booting worker with pid: 37 [2020-05-11 18:29:12 +0000] [38] [INFO] Booting worker with pid: 38 [2020-05-11 18:29:13 +0000] [39] [INFO] Booting worker with pid: 39 [2020-05-11 18:29:13 +0000] [40] [INFO] Booting worker with pid: 40 [2020-05-11:18:29:13:INFO] Model loaded successfully for worker : 37 [2020-05-11:18:29:13:INFO] Model loaded successfully for worker : 39 [2020-05-11:18:29:13:INFO] Model loaded successfully for worker : 40 [2020-05-11:18:29:13:INFO] Model loaded successfully for worker : 38 [2020-05-11:18:29:40:INFO] Sniff delimiter as ',' [2020-05-11:18:29:40:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:40:INFO] Sniff delimiter as ',' [2020-05-11:18:29:40:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:40:INFO] Sniff delimiter as ',' [2020-05-11:18:29:40:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:40:INFO] Sniff delimiter as ',' [2020-05-11:18:29:40:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:41:INFO] Sniff delimiter as ',' [2020-05-11:18:29:41:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:41:INFO] Sniff delimiter as ',' [2020-05-11:18:29:41:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:42:INFO] Sniff delimiter as ',' [2020-05-11:18:29:42:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:42:INFO] Sniff delimiter as ',' [2020-05-11:18:29:42:INFO] Determined delimiter of CSV input is ',' 2020-05-11T18:29:38.362:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-05-11:18:29:43:INFO] Sniff delimiter as ',' [2020-05-11:18:29:43:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:43:INFO] Sniff delimiter as ',' [2020-05-11:18:29:43:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:43:INFO] Sniff delimiter as ',' [2020-05-11:18:29:43:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:43:INFO] Sniff delimiter as ',' [2020-05-11:18:29:43:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:43:INFO] Sniff delimiter as ',' [2020-05-11:18:29:43:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:43:INFO] Sniff delimiter as ',' [2020-05-11:18:29:43:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:45:INFO] Sniff delimiter as ',' [2020-05-11:18:29:45:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:45:INFO] Sniff delimiter as ',' [2020-05-11:18:29:45:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:46:INFO] Sniff delimiter as ',' [2020-05-11:18:29:46:INFO] Sniff delimiter as ',' [2020-05-11:18:29:46:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:46:INFO] Sniff delimiter as ',' [2020-05-11:18:29:46:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:46:INFO] Sniff delimiter as ',' [2020-05-11:18:29:46:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:46:INFO] Sniff delimiter as ',' [2020-05-11:18:29:46:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:46:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:46:INFO] Sniff delimiter as ',' [2020-05-11:18:29:46:INFO] Determined delimiter of CSV input is ',' 
[2020-05-11:18:29:46:INFO] Sniff delimiter as ',' [2020-05-11:18:29:46:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:46:INFO] Sniff delimiter as ',' [2020-05-11:18:29:46:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:49:INFO] Sniff delimiter as ',' [2020-05-11:18:29:49:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:49:INFO] Sniff delimiter as ',' [2020-05-11:18:29:49:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:49:INFO] Sniff delimiter as ',' [2020-05-11:18:29:49:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:49:INFO] Sniff delimiter as ',' [2020-05-11:18:29:49:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:49:INFO] Sniff delimiter as ',' [2020-05-11:18:29:49:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:49:INFO] Sniff delimiter as ',' [2020-05-11:18:29:49:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:49:INFO] Sniff delimiter as ',' [2020-05-11:18:29:49:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:49:INFO] Sniff delimiter as ',' [2020-05-11:18:29:49:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:51:INFO] Sniff delimiter as ',' [2020-05-11:18:29:51:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:51:INFO] Sniff delimiter as ',' [2020-05-11:18:29:51:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:51:INFO] Sniff delimiter as ',' [2020-05-11:18:29:51:INFO] Sniff delimiter as ',' [2020-05-11:18:29:51:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:51:INFO] Sniff delimiter as ',' [2020-05-11:18:29:51:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:51:INFO] Sniff delimiter as ',' [2020-05-11:18:29:51:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:51:INFO] Sniff delimiter as ',' [2020-05-11:18:29:51:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:51:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:51:INFO] Sniff delimiter as ',' [2020-05-11:18:29:51:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:53:INFO] Sniff delimiter as ',' [2020-05-11:18:29:53:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:54:INFO] Sniff delimiter as ',' [2020-05-11:18:29:54:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:53:INFO] Sniff delimiter as ',' [2020-05-11:18:29:53:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:54:INFO] Sniff delimiter as ',' [2020-05-11:18:29:54:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:54:INFO] Sniff delimiter as ',' [2020-05-11:18:29:54:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:54:INFO] Sniff delimiter as ',' [2020-05-11:18:29:54:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:54:INFO] Sniff delimiter as ',' [2020-05-11:18:29:54:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:54:INFO] Sniff delimiter as ',' [2020-05-11:18:29:54:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:56:INFO] Sniff delimiter as ',' [2020-05-11:18:29:56:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:56:INFO] Sniff delimiter as ',' [2020-05-11:18:29:56:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:56:INFO] Sniff delimiter as ',' [2020-05-11:18:29:56:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:56:INFO] Sniff delimiter as ',' [2020-05-11:18:29:56:INFO] Determined delimiter of CSV input is ',' 
[2020-05-11:18:29:56:INFO] Sniff delimiter as ',' [2020-05-11:18:29:56:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:56:INFO] Sniff delimiter as ',' [2020-05-11:18:29:56:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:58:INFO] Sniff delimiter as ',' [2020-05-11:18:29:58:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:58:INFO] Sniff delimiter as ',' [2020-05-11:18:29:58:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:59:INFO] Sniff delimiter as ',' [2020-05-11:18:29:59:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:58:INFO] Sniff delimiter as ',' [2020-05-11:18:29:58:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:58:INFO] Sniff delimiter as ',' [2020-05-11:18:29:58:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:59:INFO] Sniff delimiter as ',' [2020-05-11:18:29:59:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:59:INFO] Sniff delimiter as ',' [2020-05-11:18:29:59:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:29:59:INFO] Sniff delimiter as ',' [2020-05-11:18:29:59:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:30:01:INFO] Sniff delimiter as ',' [2020-05-11:18:30:01:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:30:01:INFO] Sniff delimiter as ',' [2020-05-11:18:30:01:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:30:01:INFO] Sniff delimiter as ',' [2020-05-11:18:30:01:INFO] Sniff delimiter as ',' [2020-05-11:18:30:01:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:30:01:INFO] Sniff delimiter as ',' [2020-05-11:18:30:01:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:30:01:INFO] Sniff delimiter as ',' [2020-05-11:18:30:01:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:30:01:INFO] Sniff delimiter as ',' [2020-05-11:18:30:01:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:30:01:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:30:01:INFO] Sniff delimiter as ',' [2020-05-11:18:30:01:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:30:03:INFO] Sniff delimiter as ',' [2020-05-11:18:30:03:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:30:03:INFO] Sniff delimiter as ',' [2020-05-11:18:30:03:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:30:03:INFO] Sniff delimiter as ',' [2020-05-11:18:30:03:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:30:03:INFO] Sniff delimiter as ',' [2020-05-11:18:30:03:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:30:03:INFO] Sniff delimiter as ',' [2020-05-11:18:30:03:INFO] Determined delimiter of CSV input is ',' [2020-05-11:18:30:03:INFO] Sniff delimiter as ',' [2020-05-11:18:30:03:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. 
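The cell below does this with the AWS CLI. As an aside, the same copy could be done from Python with `boto3`; the following is only a hedged sketch, under the assumption that `xgb_transformer.output_path` is an `s3://bucket/prefix` URI and that the output file is named `test.csv.out`.

```python
# Sketch (assumptions noted above): download the transform output with boto3 instead of the CLI.
import os
import boto3

def download_transform_output(output_path, filename, local_dir):
    # output_path looks like 's3://<bucket>/<prefix>'
    bucket, _, key_prefix = output_path.replace('s3://', '').partition('/')
    local_file = os.path.join(local_dir, filename)
    boto3.client('s3').download_file(bucket, key_prefix + '/' + filename, local_file)
    return local_file

# Hypothetical usage: download_transform_output(xgb_transformer.output_path, 'test.csv.out', data_dir)
```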
###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/368.5 KiB (2.9 MiB/s) with 1 file(s) remaining Completed 368.5 KiB/368.5 KiB (4.0 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-central-1-293973958717/xgboost-2020-05-11-18-26-15-593/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. 
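The next cell checks a single review. A slightly stronger check, sketched below purely as an illustration and not part of the original notebook, would verify the whole encoded matrix at once.

```python
# Sketch: verify every encoded review, not just one example.
import numpy as np

new_XV_arr = np.asarray(new_XV)
assert new_XV_arr.shape[1] == 5000, "each review should have one count per vocabulary word"
assert (new_XV_arr >= 0).all(), "bag-of-words counts should never be negative"
```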
###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ...................Arguments: serve [2020-05-11 19:18:37 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-05-11 19:18:37 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-05-11 19:18:37 +0000] [1] [INFO] Using worker: gevent [2020-05-11 19:18:37 +0000] [39] [INFO] Booting worker with pid: 39 [2020-05-11 19:18:37 +0000] [40] [INFO] Booting worker with pid: 40 [2020-05-11 19:18:37 +0000] [41] [INFO] Booting worker with pid: 41 [2020-05-11 19:18:37 +0000] [42] [INFO] Booting worker with pid: 42 [2020-05-11:19:18:37:INFO] Model loaded successfully for worker : 39 [2020-05-11:19:18:37:INFO] Model loaded successfully for worker : 40 [2020-05-11:19:18:37:INFO] Model loaded successfully for worker : 41 [2020-05-11:19:18:37:INFO] Model loaded successfully for worker : 42 2020-05-11T19:18:55.258:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-05-11:19:18:57:INFO] Sniff delimiter as ',' [2020-05-11:19:18:57:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:18:57:INFO] Sniff delimiter as ',' [2020-05-11:19:18:57:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:18:58:INFO] Sniff delimiter as ',' [2020-05-11:19:18:58:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:18:58:INFO] Sniff delimiter as ',' [2020-05-11:19:18:58:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:18:57:INFO] Sniff delimiter as ',' [2020-05-11:19:18:57:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:18:57:INFO] Sniff delimiter as ',' [2020-05-11:19:18:57:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:18:58:INFO] Sniff delimiter as ',' [2020-05-11:19:18:58:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:18:58:INFO] Sniff delimiter as ',' [2020-05-11:19:18:58:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:00:INFO] Sniff delimiter as ',' 
[2020-05-11:19:19:00:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:00:INFO] Sniff delimiter as ',' [2020-05-11:19:19:00:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:00:INFO] Sniff delimiter as ',' [2020-05-11:19:19:00:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:00:INFO] Sniff delimiter as ',' [2020-05-11:19:19:00:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:00:INFO] Sniff delimiter as ',' [2020-05-11:19:19:00:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:00:INFO] Sniff delimiter as ',' [2020-05-11:19:19:00:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:00:INFO] Sniff delimiter as ',' [2020-05-11:19:19:00:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:00:INFO] Sniff delimiter as ',' [2020-05-11:19:19:00:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:02:INFO] Sniff delimiter as ',' [2020-05-11:19:19:02:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:02:INFO] Sniff delimiter as ',' [2020-05-11:19:19:02:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:03:INFO] Sniff delimiter as ',' [2020-05-11:19:19:03:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:03:INFO] Sniff delimiter as ',' [2020-05-11:19:19:03:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:02:INFO] Sniff delimiter as ',' [2020-05-11:19:19:02:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:02:INFO] Sniff delimiter as ',' [2020-05-11:19:19:02:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:03:INFO] Sniff delimiter as ',' [2020-05-11:19:19:03:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:03:INFO] Sniff delimiter as ',' [2020-05-11:19:19:03:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:04:INFO] Sniff delimiter as ',' [2020-05-11:19:19:04:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:05:INFO] Sniff delimiter as ',' [2020-05-11:19:19:05:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:04:INFO] Sniff delimiter as ',' [2020-05-11:19:19:04:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:05:INFO] Sniff delimiter as ',' [2020-05-11:19:19:05:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:05:INFO] Sniff delimiter as ',' [2020-05-11:19:19:05:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:05:INFO] Sniff delimiter as ',' [2020-05-11:19:19:05:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:05:INFO] Sniff delimiter as ',' [2020-05-11:19:19:05:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:05:INFO] Sniff delimiter as ',' [2020-05-11:19:19:05:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:08:INFO] Sniff delimiter as ',' [2020-05-11:19:19:08:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:08:INFO] Sniff delimiter as ',' [2020-05-11:19:19:08:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:08:INFO] Sniff delimiter as ',' [2020-05-11:19:19:08:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:08:INFO] Sniff delimiter as ',' [2020-05-11:19:19:08:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:09:INFO] Sniff delimiter as ',' [2020-05-11:19:19:09:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:10:INFO] Sniff delimiter as ',' [2020-05-11:19:19:10:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:09:INFO] Sniff delimiter as ',' 
[2020-05-11:19:19:09:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:10:INFO] Sniff delimiter as ',' [2020-05-11:19:19:10:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:10:INFO] Sniff delimiter as ',' [2020-05-11:19:19:10:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:10:INFO] Sniff delimiter as ',' [2020-05-11:19:19:10:INFO] Sniff delimiter as ',' [2020-05-11:19:19:10:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:10:INFO] Sniff delimiter as ',' [2020-05-11:19:19:10:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:10:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:12:INFO] Sniff delimiter as ',' [2020-05-11:19:19:12:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:12:INFO] Sniff delimiter as ',' [2020-05-11:19:19:12:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:12:INFO] Sniff delimiter as ',' [2020-05-11:19:19:12:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:12:INFO] Sniff delimiter as ',' [2020-05-11:19:19:12:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:12:INFO] Sniff delimiter as ',' [2020-05-11:19:19:12:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:13:INFO] Sniff delimiter as ',' [2020-05-11:19:19:13:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:12:INFO] Sniff delimiter as ',' [2020-05-11:19:19:12:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:13:INFO] Sniff delimiter as ',' [2020-05-11:19:19:13:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:14:INFO] Sniff delimiter as ',' [2020-05-11:19:19:14:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:14:INFO] Sniff delimiter as ',' [2020-05-11:19:19:14:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:15:INFO] Sniff delimiter as ',' [2020-05-11:19:19:15:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:15:INFO] Sniff delimiter as ',' [2020-05-11:19:19:15:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:15:INFO] Sniff delimiter as ',' [2020-05-11:19:19:15:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:15:INFO] Sniff delimiter as ',' [2020-05-11:19:19:15:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:15:INFO] Sniff delimiter as ',' [2020-05-11:19:19:15:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:15:INFO] Sniff delimiter as ',' [2020-05-11:19:19:15:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:17:INFO] Sniff delimiter as ',' [2020-05-11:19:19:17:INFO] Determined delimiter of CSV input is ',' [2020-05-11:19:19:17:INFO] Sniff delimiter as ',' [2020-05-11:19:19:17:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/368.6 KiB (4.0 MiB/s) with 1 file(s) remaining Completed 368.6 KiB/368.6 KiB (5.4 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-central-1-293973958717/xgboost-2020-05-11-19-15-40-001/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. 
###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed, since our model is no longer (as) effective at determining the sentiment of a user-provided review.In a real-life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one thing, namely whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit, so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real-life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output WARNING:sagemaker:Using already existing model: xgboost-2020-05-11-18-20-35-237 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer

# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews, so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y):
    for idx, smp in enumerate(in_X):
        res = round(float(xgb_predictor.predict(in_XV[idx])))
        if res != in_Y[idx]:
            yield smp, in_Y[idx]

gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply pass our generator to the built-in `next` function.
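If several misclassified examples are wanted at once, the generator can also be sliced. The snippet below is a small illustrative sketch rather than part of the original flow; like the cell that follows, it sends each candidate review to the deployed endpoint, so it incurs one prediction call per review.

```python
# Sketch: collect a handful of misclassified reviews in one go.
from itertools import islice

for words, true_label in islice(get_sample(new_X, new_XV, new_Y), 3):
    print(true_label, ' '.join(words[:20]))
```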
###Code print(next(gn)) ###Output (['murder', 'occur', 'texa', 'desert', 'town', 'respons', 'slight', 'novelti', 'mysteri', 'racial', 'tension', 'latter', 'realli', 'fit', 'otherwis', 'strictli', 'slasher', 'fan', 'appreci', 'gore', 'nuditi', 'two', 'convent', 'element', 'film', 'dana', 'kimmel', 'friday', '13th', 'part', '3', 'infami', 'star', 'bratti', 'quasi', 'detect', 'teen', '1', '2', 'mpaa', 'rate', 'r', 'violenc', 'gore', 'nuditi', 'languag', 'banana'], 1) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x)
new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys())
new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'ghetto', 'victorian', 'spill', 'playboy', '21st', 'reincarn', 'weari'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'dubiou', 'orchestr', 'masterson', 'sophi', 'omin', 'banana', 'optimist'} ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here? Not only which words (if any) appear with a larger than expected frequency, but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open-ended question. To investigate, you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data.
If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary that we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd

# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])

new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False)
pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False)
pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix)
new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix)
new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model.
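# Added note (commentary, not part of the original instructions): the estimator below reuses
# the same container, role, and session objects that were used for the original model, and the
# same hyperparameters, so that any change in performance can be attributed to the new data
# rather than to a different training configuration.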
new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2020-05-11 19:55:14 Starting - Starting the training job... 2020-05-11 19:55:16 Starting - Launching requested ML instances... 2020-05-11 19:56:14 Starting - Preparing the instances for training...... 2020-05-11 19:57:02 Downloading - Downloading input data... 2020-05-11 19:57:42 Training - Training image download completed. Training in progress..Arguments: train [2020-05-11:19:57:43:INFO] Running standalone xgboost training. [2020-05-11:19:57:43:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8487.43mb [2020-05-11:19:57:43:INFO] Determined delimiter of CSV input is ',' [19:57:43] S3DistributionType set as FullyReplicated [19:57:45] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-05-11:19:57:45:INFO] Determined delimiter of CSV input is ',' [19:57:45] S3DistributionType set as FullyReplicated [19:57:46] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [19:57:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 2 pruned nodes, max_depth=5 [0]#011train-error:0.3068#011validation-error:0.3205 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. 
[19:57:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 4 pruned nodes, max_depth=5 [1]#011train-error:0.291267#011validation-error:0.3039 [19:57:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [2]#011train-error:0.2792#011validation-error:0.2939 [19:57:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [3]#011train-error:0.2778#011validation-error:0.2894 [19:57:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [4]#011train-error:0.263133#011validation-error:0.2734 [19:57:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.256333#011validation-error:0.2698 [19:57:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [6]#011train-error:0.248867#011validation-error:0.2655 [19:57:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 2 pruned nodes, max_depth=5 [7]#011train-error:0.240133#011validation-error:0.26 [19:58:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [8]#011train-error:0.2336#011validation-error:0.2498 [19:58:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [9]#011train-error:0.233867#011validation-error:0.2497 [19:58:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 10 pruned nodes, max_depth=5 [10]#011train-error:0.226533#011validation-error:0.2446 [19:58:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 10 pruned nodes, max_depth=5 [11]#011train-error:0.2208#011validation-error:0.2397 [19:58:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [12]#011train-error:0.2186#011validation-error:0.2376 [19:58:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [13]#011train-error:0.217067#011validation-error:0.2362 [19:58:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [14]#011train-error:0.2132#011validation-error:0.2355 [19:58:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [15]#011train-error:0.2084#011validation-error:0.2315 [19:58:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [16]#011train-error:0.203933#011validation-error:0.228 [19:58:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [17]#011train-error:0.203133#011validation-error:0.2276 [19:58:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [18]#011train-error:0.2004#011validation-error:0.2233 [19:58:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [19]#011train-error:0.197067#011validation-error:0.2222 [19:58:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [20]#011train-error:0.195467#011validation-error:0.221 [19:58:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [21]#011train-error:0.194333#011validation-error:0.2198 [19:58:18] src/tree/updater_prune.cc:74: tree 
pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [22]#011train-error:0.191#011validation-error:0.2179 [19:58:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [23]#011train-error:0.1886#011validation-error:0.2154 [19:58:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [24]#011train-error:0.186933#011validation-error:0.2134 [19:58:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 10 pruned nodes, max_depth=5 [25]#011train-error:0.184333#011validation-error:0.2119 [19:58:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [26]#011train-error:0.183267#011validation-error:0.2088 [19:58:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 10 pruned nodes, max_depth=5 [27]#011train-error:0.178333#011validation-error:0.2082 [19:58:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [28]#011train-error:0.176267#011validation-error:0.2074 [19:58:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 12 pruned nodes, max_depth=5 [29]#011train-error:0.173133#011validation-error:0.2059 [19:58:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [30]#011train-error:0.1724#011validation-error:0.2046 [19:58:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [31]#011train-error:0.173133#011validation-error:0.2037 [19:58:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [32]#011train-error:0.172933#011validation-error:0.2022 [19:58:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [33]#011train-error:0.1706#011validation-error:0.2024 [19:58:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [34]#011train-error:0.1684#011validation-error:0.2 [19:58:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [35]#011train-error:0.1674#011validation-error:0.1992 [19:58:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [36]#011train-error:0.166#011validation-error:0.1986 [19:58:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [37]#011train-error:0.163467#011validation-error:0.1982 [19:58:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [38]#011train-error:0.162267#011validation-error:0.1964 [19:58:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [39]#011train-error:0.160067#011validation-error:0.1963 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. 
We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ....................Arguments: serve [2020-05-11 20:03:44 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-05-11 20:03:44 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-05-11 20:03:44 +0000] [1] [INFO] Using worker: gevent [2020-05-11 20:03:44 +0000] [37] [INFO] Booting worker with pid: 37 [2020-05-11 20:03:44 +0000] [38] [INFO] Booting worker with pid: 38 [2020-05-11 20:03:44 +0000] [39] [INFO] Booting worker with pid: 39 [2020-05-11:20:03:44:INFO] Model loaded successfully for worker : 37 [2020-05-11 20:03:44 +0000] [40] [INFO] Booting worker with pid: 40 [2020-05-11:20:03:44:INFO] Model loaded successfully for worker : 38 [2020-05-11:20:03:44:INFO] Model loaded successfully for worker : 40 [2020-05-11:20:03:44:INFO] Model loaded successfully for worker : 39 2020-05-11T20:04:02.585:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-05-11:20:04:05:INFO] Sniff delimiter as ',' [2020-05-11:20:04:05:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:05:INFO] Sniff delimiter as ',' [2020-05-11:20:04:05:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:05:INFO] Sniff delimiter as ',' [2020-05-11:20:04:05:INFO] Sniff delimiter as ',' [2020-05-11:20:04:05:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:05:INFO] Sniff delimiter as ',' [2020-05-11:20:04:05:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:05:INFO] Sniff delimiter as ',' [2020-05-11:20:04:05:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:05:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:06:INFO] Sniff delimiter as ',' [2020-05-11:20:04:06:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:06:INFO] Sniff delimiter as ',' [2020-05-11:20:04:06:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:07:INFO] Sniff delimiter as ',' [2020-05-11:20:04:07:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:07:INFO] Sniff delimiter as ',' [2020-05-11:20:04:07:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:07:INFO] Sniff delimiter as ',' [2020-05-11:20:04:07:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:07:INFO] Sniff delimiter as ',' [2020-05-11:20:04:07:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:08:INFO] Sniff delimiter as ',' [2020-05-11:20:04:08:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:09:INFO] Sniff delimiter as ',' [2020-05-11:20:04:09:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:08:INFO] Sniff delimiter as ',' [2020-05-11:20:04:08:INFO] Determined delimiter of CSV input is ',' 
[2020-05-11:20:04:09:INFO] Sniff delimiter as ',' [2020-05-11:20:04:09:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:09:INFO] Sniff delimiter as ',' [2020-05-11:20:04:09:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:09:INFO] Sniff delimiter as ',' [2020-05-11:20:04:09:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:11:INFO] Sniff delimiter as ',' [2020-05-11:20:04:11:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:11:INFO] Sniff delimiter as ',' [2020-05-11:20:04:11:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:12:INFO] Sniff delimiter as ',' [2020-05-11:20:04:12:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:12:INFO] Sniff delimiter as ',' [2020-05-11:20:04:12:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:12:INFO] Sniff delimiter as ',' [2020-05-11:20:04:12:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:12:INFO] Sniff delimiter as ',' [2020-05-11:20:04:12:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:12:INFO] Sniff delimiter as ',' [2020-05-11:20:04:12:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:12:INFO] Sniff delimiter as ',' [2020-05-11:20:04:12:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:12:INFO] Sniff delimiter as ',' [2020-05-11:20:04:12:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:12:INFO] Sniff delimiter as ',' [2020-05-11:20:04:12:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:15:INFO] Sniff delimiter as ',' [2020-05-11:20:04:15:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:15:INFO] Sniff delimiter as ',' [2020-05-11:20:04:15:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:15:INFO] Sniff delimiter as ',' [2020-05-11:20:04:15:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:15:INFO] Sniff delimiter as ',' [2020-05-11:20:04:15:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:15:INFO] Sniff delimiter as ',' [2020-05-11:20:04:15:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:15:INFO] Sniff delimiter as ',' [2020-05-11:20:04:15:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:15:INFO] Sniff delimiter as ',' [2020-05-11:20:04:15:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:15:INFO] Sniff delimiter as ',' [2020-05-11:20:04:15:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:17:INFO] Sniff delimiter as ',' [2020-05-11:20:04:17:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:17:INFO] Sniff delimiter as ',' [2020-05-11:20:04:17:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:17:INFO] Sniff delimiter as ',' [2020-05-11:20:04:17:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:17:INFO] Sniff delimiter as ',' [2020-05-11:20:04:17:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:17:INFO] Sniff delimiter as ',' [2020-05-11:20:04:17:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:17:INFO] Sniff delimiter as ',' [2020-05-11:20:04:17:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:17:INFO] Sniff delimiter as ',' [2020-05-11:20:04:17:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:17:INFO] Sniff delimiter as ',' [2020-05-11:20:04:17:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:19:INFO] Sniff delimiter as ',' [2020-05-11:20:04:19:INFO] Determined delimiter of CSV input is ',' 
[2020-05-11:20:04:19:INFO] Sniff delimiter as ',' [2020-05-11:20:04:19:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:20:INFO] Sniff delimiter as ',' [2020-05-11:20:04:20:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:19:INFO] Sniff delimiter as ',' [2020-05-11:20:04:19:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:19:INFO] Sniff delimiter as ',' [2020-05-11:20:04:19:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:20:INFO] Sniff delimiter as ',' [2020-05-11:20:04:20:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:22:INFO] Sniff delimiter as ',' [2020-05-11:20:04:22:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:22:INFO] Sniff delimiter as ',' [2020-05-11:20:04:22:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:22:INFO] Sniff delimiter as ',' [2020-05-11:20:04:22:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:22:INFO] Sniff delimiter as ',' [2020-05-11:20:04:22:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:24:INFO] Sniff delimiter as ',' [2020-05-11:20:04:24:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:24:INFO] Sniff delimiter as ',' [2020-05-11:20:04:24:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:24:INFO] Sniff delimiter as ',' [2020-05-11:20:04:24:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:25:INFO] Sniff delimiter as ',' [2020-05-11:20:04:25:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:24:INFO] Sniff delimiter as ',' [2020-05-11:20:04:24:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:24:INFO] Sniff delimiter as ',' [2020-05-11:20:04:24:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:24:INFO] Sniff delimiter as ',' [2020-05-11:20:04:24:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:25:INFO] Sniff delimiter as ',' [2020-05-11:20:04:25:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:27:INFO] Sniff delimiter as ',' [2020-05-11:20:04:27:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:27:INFO] Sniff delimiter as ',' [2020-05-11:20:04:27:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:27:INFO] Sniff delimiter as ',' [2020-05-11:20:04:27:INFO] Sniff delimiter as ',' [2020-05-11:20:04:27:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:27:INFO] Sniff delimiter as ',' [2020-05-11:20:04:27:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:27:INFO] Sniff delimiter as ',' [2020-05-11:20:04:27:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:27:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:27:INFO] Sniff delimiter as ',' [2020-05-11:20:04:27:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:27:INFO] Sniff delimiter as ',' [2020-05-11:20:04:27:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:29:INFO] Sniff delimiter as ',' [2020-05-11:20:04:29:INFO] Determined delimiter of CSV input is ',' [2020-05-11:20:04:29:INFO] Sniff delimiter as ',' [2020-05-11:20:04:29:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. 
###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/366.2 KiB (2.9 MiB/s) with 1 file(s) remaining Completed 366.2 KiB/366.2 KiB (4.0 MiB/s) with 1 file(s) remaining download: s3://sagemaker-eu-central-1-293973958717/xgboost-2020-05-11-20-00-43-251/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. 
As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. # Solution: new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. # new_xgb_endpoint_config_info = None # Solution: new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. # Solution: session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output ------------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. 
Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). 
It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-09-27 20:04:41-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 21.5MB/s in 5.1s 2020-09-27 20:04:46 (15.8 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our 
training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. ###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
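As a quick illustration of the idea (a minimal, hypothetical sketch on a tiny made-up corpus, not the notebook's data), a bag-of-words encoding simply counts how often each vocabulary word appears in each already-tokenized document: ###Code from sklearn.feature_extraction.text import CountVectorizer

# Tiny made-up corpus of already-tokenized "reviews" (illustration only)
toy_docs = [['great', 'movi', 'great', 'act'], ['bad', 'movi', 'bad', 'plot']]

# Identity preprocessor/tokenizer because the documents are already lists of words
toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
toy_features = toy_vectorizer.fit_transform(toy_docs).toarray()

print(toy_vectorizer.vocabulary_)  # maps each word to its column index
print(toy_features)                # one row per document, one count per vocabulary word ###Output _____no_output_____ ###Markdown The cell below applies the same idea to the movie review data, together with caching so that the features only need to be computed once.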
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays (sklearn.externals.joblib is deprecated, so we import joblib directly) def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10,000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker.
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2020-09-27 20:07:06 Starting - Starting the training job... 2020-09-27 20:07:08 Starting - Launching requested ML instances...... 2020-09-27 20:08:20 Starting - Preparing the instances for training...... 2020-09-27 20:09:24 Downloading - Downloading input data... 2020-09-27 20:09:57 Training - Downloading the training image..Arguments: train [2020-09-27:20:10:18:INFO] Running standalone xgboost training. [2020-09-27:20:10:18:INFO] File size need to be processed in the node: 238.47mb.
Available memory size in the node: 8515.67mb [2020-09-27:20:10:18:INFO] Determined delimiter of CSV input is ',' [20:10:18] S3DistributionType set as FullyReplicated [20:10:20] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-09-27:20:10:20:INFO] Determined delimiter of CSV input is ',' [20:10:20] S3DistributionType set as FullyReplicated [20:10:21] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [20:10:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [0]#011train-error:0.279#011validation-error:0.2915 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. 2020-09-27 20:10:18 Training - Training image download completed. Training in progress.[20:10:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [1]#011train-error:0.274267#011validation-error:0.2873 [20:10:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [2]#011train-error:0.2708#011validation-error:0.284 [20:10:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [3]#011train-error:0.262533#011validation-error:0.273 [20:10:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [4]#011train-error:0.260133#011validation-error:0.2739 [20:10:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [5]#011train-error:0.254533#011validation-error:0.2701 [20:10:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5 [6]#011train-error:0.2478#011validation-error:0.2614 [20:10:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [7]#011train-error:0.240533#011validation-error:0.2569 [20:10:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [8]#011train-error:0.233267#011validation-error:0.2494 [20:10:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [9]#011train-error:0.2286#011validation-error:0.2462 [20:10:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5 [10]#011train-error:0.221533#011validation-error:0.2401 [20:10:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [11]#011train-error:0.216267#011validation-error:0.2351 [20:10:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [12]#011train-error:0.2108#011validation-error:0.232 [20:10:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [13]#011train-error:0.209#011validation-error:0.2298 [20:10:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [14]#011train-error:0.203333#011validation-error:0.2245 [20:10:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [15]#011train-error:0.199733#011validation-error:0.221 [20:10:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 14 pruned nodes, 
max_depth=5 [16]#011train-error:0.195333#011validation-error:0.2186 [20:10:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [17]#011train-error:0.1928#011validation-error:0.2154 [20:10:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 16 pruned nodes, max_depth=5 [18]#011train-error:0.187533#011validation-error:0.2132 [20:10:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [19]#011train-error:0.184067#011validation-error:0.2114 [20:10:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [20]#011train-error:0.180867#011validation-error:0.2096 [20:10:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [21]#011train-error:0.179133#011validation-error:0.2076 [20:10:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [22]#011train-error:0.1772#011validation-error:0.2063 [20:10:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 10 pruned nodes, max_depth=5 [23]#011train-error:0.175267#011validation-error:0.2029 [20:10:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [24]#011train-error:0.1718#011validation-error:0.2001 [20:10:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [25]#011train-error:0.168933#011validation-error:0.1984 [20:10:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [26]#011train-error:0.1686#011validation-error:0.1972 [20:11:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [27]#011train-error:0.166867#011validation-error:0.1955 [20:11:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 14 pruned nodes, max_depth=5 [28]#011train-error:0.166133#011validation-error:0.1945 [20:11:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [29]#011train-error:0.164267#011validation-error:0.1937 [20:11:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5 [30]#011train-error:0.162733#011validation-error:0.1913 [20:11:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [31]#011train-error:0.161267#011validation-error:0.1892 [20:11:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [32]#011train-error:0.160133#011validation-error:0.1892 [20:11:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [33]#011train-error:0.158067#011validation-error:0.1864 [20:11:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [34]#011train-error:0.158467#011validation-error:0.187 [20:11:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [35]#011train-error:0.156867#011validation-error:0.1864 [20:11:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [36]#011train-error:0.156267#011validation-error:0.1878 [20:11:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 
[37]#011train-error:0.1546#011validation-error:0.1873 [20:11:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [38]#011train-error:0.153467#011validation-error:0.1867 [20:11:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [39]#011train-error:0.150867#011validation-error:0.1862 [20:11:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [40]#011train-error:0.149533#011validation-error:0.1858 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code xgb_transformer.wait() ###Output ..............................2020-09-27T21:07:16.574:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2020-09-27 21:07:16 +0000] [1] [INFO] Starting gunicorn 19.7.1 Arguments: serve [2020-09-27 21:07:16 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-09-27 21:07:16 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-09-27 21:07:16 +0000] [1] [INFO] Using worker: gevent [2020-09-27 21:07:16 +0000] [36] [INFO] Booting worker with pid: 36 [2020-09-27 21:07:16 +0000] [37] [INFO] Booting worker with pid: 37 [2020-09-27 21:07:16 +0000] [38] [INFO] Booting worker with pid: 38 [2020-09-27:21:07:16:INFO] Model loaded successfully for worker : 36 [2020-09-27 21:07:16 +0000] [39] [INFO] Booting worker with pid: 39 [2020-09-27:21:07:16:INFO] Model loaded successfully for worker : 37 [2020-09-27:21:07:16:INFO] Model loaded successfully for worker : 38 [2020-09-27:21:07:16:INFO] Model loaded successfully for worker : 39 [2020-09-27:21:07:16:INFO] Sniff delimiter as ',' [2020-09-27:21:07:16:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:17:INFO] Sniff delimiter as ',' [2020-09-27:21:07:17:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:17:INFO] Sniff delimiter as ',' [2020-09-27:21:07:17:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:17:INFO] Sniff delimiter as ',' [2020-09-27 21:07:16 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-09-27 21:07:16 +0000] [1] [INFO] Using worker: gevent [2020-09-27 21:07:16 +0000] [36] [INFO] Booting worker with pid: 36 [2020-09-27 21:07:16 +0000] [37] [INFO] Booting worker with pid: 37 [2020-09-27 21:07:16 +0000] [38] [INFO] Booting worker with pid: 38 [2020-09-27:21:07:16:INFO] Model loaded successfully for worker : 36 [2020-09-27 21:07:16 +0000] [39] [INFO] Booting worker with pid: 39 [2020-09-27:21:07:16:INFO] Model loaded successfully for worker : 37 [2020-09-27:21:07:16:INFO] Model loaded successfully for worker : 38 [2020-09-27:21:07:16:INFO] Model loaded successfully for worker : 39 [2020-09-27:21:07:16:INFO] Sniff delimiter as ',' [2020-09-27:21:07:16:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:17:INFO] Sniff delimiter as ',' [2020-09-27:21:07:17:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:17:INFO] Sniff delimiter as ',' [2020-09-27:21:07:17:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:17:INFO] Sniff delimiter as ',' [2020-09-27:21:07:17:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:17:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:19:INFO] Sniff delimiter as ',' [2020-09-27:21:07:19:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:19:INFO] Sniff delimiter as ',' [2020-09-27:21:07:19:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:19:INFO] Sniff delimiter as ',' [2020-09-27:21:07:19:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:19:INFO] Sniff delimiter as ',' [2020-09-27:21:07:19:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:19:INFO] Sniff delimiter as ',' [2020-09-27:21:07:19:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:20:INFO] Sniff delimiter as ',' [2020-09-27:21:07:20:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:19:INFO] Sniff delimiter as ',' [2020-09-27:21:07:19:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:20:INFO] Sniff delimiter as ',' 
[2020-09-27:21:07:20:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:21:INFO] Sniff delimiter as ',' [2020-09-27:21:07:21:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:21:INFO] Sniff delimiter as ',' [2020-09-27:21:07:21:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:22:INFO] Sniff delimiter as ',' [2020-09-27:21:07:22:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:22:INFO] Sniff delimiter as ',' [2020-09-27:21:07:22:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:21:INFO] Sniff delimiter as ',' [2020-09-27:21:07:21:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:21:INFO] Sniff delimiter as ',' [2020-09-27:21:07:21:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:22:INFO] Sniff delimiter as ',' [2020-09-27:21:07:22:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:22:INFO] Sniff delimiter as ',' [2020-09-27:21:07:22:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:24:INFO] Sniff delimiter as ',' [2020-09-27:21:07:24:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:24:INFO] Sniff delimiter as ',' [2020-09-27:21:07:24:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:24:INFO] Sniff delimiter as ',' [2020-09-27:21:07:24:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:24:INFO] Sniff delimiter as ',' [2020-09-27:21:07:24:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:24:INFO] Sniff delimiter as ',' [2020-09-27:21:07:24:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:24:INFO] Sniff delimiter as ',' [2020-09-27:21:07:24:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:25:INFO] Sniff delimiter as ',' [2020-09-27:21:07:25:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:25:INFO] Sniff delimiter as ',' [2020-09-27:21:07:25:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:26:INFO] Sniff delimiter as ',' [2020-09-27:21:07:26:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:26:INFO] Sniff delimiter as ',' [2020-09-27:21:07:26:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:27:INFO] Sniff delimiter as ',' [2020-09-27:21:07:27:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:27:INFO] Sniff delimiter as ',' [2020-09-27:21:07:27:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:26:INFO] Sniff delimiter as ',' [2020-09-27:21:07:26:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:26:INFO] Sniff delimiter as ',' [2020-09-27:21:07:26:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:27:INFO] Sniff delimiter as ',' [2020-09-27:21:07:27:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:27:INFO] Sniff delimiter as ',' [2020-09-27:21:07:27:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:31:INFO] Sniff delimiter as ',' [2020-09-27:21:07:31:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:31:INFO] Sniff delimiter as ',' [2020-09-27:21:07:31:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:31:INFO] Sniff delimiter as ',' [2020-09-27:21:07:31:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:32:INFO] Sniff delimiter as ',' [2020-09-27:21:07:32:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:32:INFO] Sniff delimiter as ',' [2020-09-27:21:07:32:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:31:INFO] Sniff delimiter as ',' 
[2020-09-27:21:07:31:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:32:INFO] Sniff delimiter as ',' [2020-09-27:21:07:32:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:32:INFO] Sniff delimiter as ',' [2020-09-27:21:07:32:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:34:INFO] Sniff delimiter as ',' [2020-09-27:21:07:34:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:34:INFO] Sniff delimiter as ',' [2020-09-27:21:07:34:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:34:INFO] Sniff delimiter as ',' [2020-09-27:21:07:34:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:34:INFO] Sniff delimiter as ',' [2020-09-27:21:07:34:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:34:INFO] Sniff delimiter as ',' [2020-09-27:21:07:34:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:34:INFO] Sniff delimiter as ',' [2020-09-27:21:07:34:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:34:INFO] Sniff delimiter as ',' [2020-09-27:21:07:34:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:34:INFO] Sniff delimiter as ',' [2020-09-27:21:07:34:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:36:INFO] Sniff delimiter as ',' [2020-09-27:21:07:36:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:36:INFO] Sniff delimiter as ',' [2020-09-27:21:07:36:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:36:INFO] Sniff delimiter as ',' [2020-09-27:21:07:36:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:36:INFO] Sniff delimiter as ',' [2020-09-27:21:07:36:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:37:INFO] Sniff delimiter as ',' [2020-09-27:21:07:37:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:36:INFO] Sniff delimiter as ',' [2020-09-27:21:07:36:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:36:INFO] Sniff delimiter as ',' [2020-09-27:21:07:36:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:07:37:INFO] Sniff delimiter as ',' [2020-09-27:21:07:37:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/370.4 KiB (3.3 MiB/s) with 1 file(s) remaining Completed 370.4 KiB/370.4 KiB (4.6 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-1-956613579044/xgboost-2020-09-27-20-14-56-466/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. 
As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary # Passing the original vocabulary keeps the column ordering consistent with the features the model was trained on vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`.
###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output Arguments: serve Arguments: serve [2020-09-27 21:13:22 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-09-27 21:13:22 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-09-27 21:13:22 +0000] [1] [INFO] Using worker: gevent [2020-09-27 21:13:22 +0000] [36] [INFO] Booting worker with pid: 36 [2020-09-27:21:13:22:INFO] Model loaded successfully for worker : 36 [2020-09-27 21:13:22 +0000] [37] [INFO] Booting worker with pid: 37 [2020-09-27 21:13:23 +0000] [38] [INFO] Booting worker with pid: 38 [2020-09-27 21:13:23 +0000] [39] [INFO] Booting worker with pid: 39 [2020-09-27:21:13:23:INFO] Model loaded successfully for worker : 37 [2020-09-27:21:13:23:INFO] Model loaded successfully for worker : 38 [2020-09-27:21:13:23:INFO] Model loaded successfully for worker : 39 [2020-09-27:21:13:23:INFO] Sniff delimiter as ',' [2020-09-27 21:13:22 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-09-27 21:13:22 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-09-27 21:13:22 +0000] [1] [INFO] Using worker: gevent [2020-09-27 21:13:22 +0000] [36] [INFO] Booting worker with pid: 36 [2020-09-27:21:13:22:INFO] Model loaded successfully for worker : 36 [2020-09-27 21:13:22 +0000] [37] [INFO] Booting worker with pid: 37 [2020-09-27 21:13:23 +0000] [38] [INFO] Booting worker with pid: 38 [2020-09-27 21:13:23 +0000] [39] [INFO] Booting worker with pid: 39 [2020-09-27:21:13:23:INFO] Model loaded successfully for worker : 37 [2020-09-27:21:13:23:INFO] Model loaded successfully for worker : 38 [2020-09-27:21:13:23:INFO] Model loaded successfully for worker : 39 [2020-09-27:21:13:23:INFO] Sniff delimiter as ',' [2020-09-27:21:13:23:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:23:INFO] Sniff delimiter as ',' [2020-09-27:21:13:23:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:23:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:23:INFO] Sniff delimiter as ',' [2020-09-27:21:13:23:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:23:INFO] Sniff delimiter as ',' [2020-09-27:21:13:23:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:23:INFO] Sniff delimiter as ',' [2020-09-27:21:13:23:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:23:INFO] Sniff delimiter as ',' [2020-09-27:21:13:23:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:23:INFO] Sniff delimiter as ',' [2020-09-27:21:13:23:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:25:INFO] Sniff delimiter as ',' [2020-09-27:21:13:25:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:26:INFO] Sniff delimiter as ',' [2020-09-27:21:13:26:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:26:INFO] Sniff delimiter as ',' [2020-09-27:21:13:26:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:25:INFO] Sniff delimiter as ',' [2020-09-27:21:13:25:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:26:INFO] Sniff delimiter as ',' [2020-09-27:21:13:26:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:26:INFO] Sniff delimiter as ',' [2020-09-27:21:13:26:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:26:INFO] Sniff delimiter as ',' [2020-09-27:21:13:26:INFO] Determined delimiter of 
CSV input is ',' [2020-09-27:21:13:26:INFO] Sniff delimiter as ',' [2020-09-27:21:13:26:INFO] Determined delimiter of CSV input is ',' 2020-09-27T21:13:22.999:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-09-27:21:13:28:INFO] Sniff delimiter as ',' [2020-09-27:21:13:28:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:28:INFO] Sniff delimiter as ',' [2020-09-27:21:13:28:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:28:INFO] Sniff delimiter as ',' [2020-09-27:21:13:28:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:28:INFO] Sniff delimiter as ',' [2020-09-27:21:13:28:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:28:INFO] Sniff delimiter as ',' [2020-09-27:21:13:28:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:29:INFO] Sniff delimiter as ',' [2020-09-27:21:13:29:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:28:INFO] Sniff delimiter as ',' [2020-09-27:21:13:28:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:29:INFO] Sniff delimiter as ',' [2020-09-27:21:13:29:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:30:INFO] Sniff delimiter as ',' [2020-09-27:21:13:30:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:31:INFO] Sniff delimiter as ',' [2020-09-27:21:13:31:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:31:INFO] Sniff delimiter as ',' [2020-09-27:21:13:31:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:30:INFO] Sniff delimiter as ',' [2020-09-27:21:13:30:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:31:INFO] Sniff delimiter as ',' [2020-09-27:21:13:31:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:31:INFO] Sniff delimiter as ',' [2020-09-27:21:13:31:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:31:INFO] Sniff delimiter as ',' [2020-09-27:21:13:31:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:31:INFO] Sniff delimiter as ',' [2020-09-27:21:13:31:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:33:INFO] Sniff delimiter as ',' [2020-09-27:21:13:33:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:33:INFO] Sniff delimiter as ',' [2020-09-27:21:13:33:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:33:INFO] Sniff delimiter as ',' [2020-09-27:21:13:33:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:33:INFO] Sniff delimiter as ',' [2020-09-27:21:13:33:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:33:INFO] Sniff delimiter as ',' [2020-09-27:21:13:33:INFO] Sniff delimiter as ',' [2020-09-27:21:13:33:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:34:INFO] Sniff delimiter as ',' [2020-09-27:21:13:34:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:33:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:34:INFO] Sniff delimiter as ',' [2020-09-27:21:13:34:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:35:INFO] Sniff delimiter as ',' [2020-09-27:21:13:35:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:35:INFO] Sniff delimiter as ',' [2020-09-27:21:13:35:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:35:INFO] Sniff delimiter as ',' [2020-09-27:21:13:35:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:36:INFO] Sniff delimiter as ',' [2020-09-27:21:13:36:INFO] Determined delimiter of CSV input is ',' 
[2020-09-27:21:13:35:INFO] Sniff delimiter as ',' [2020-09-27:21:13:35:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:36:INFO] Sniff delimiter as ',' [2020-09-27:21:13:36:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:36:INFO] Sniff delimiter as ',' [2020-09-27:21:13:36:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:36:INFO] Sniff delimiter as ',' [2020-09-27:21:13:36:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:38:INFO] Sniff delimiter as ',' [2020-09-27:21:13:38:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:38:INFO] Sniff delimiter as ',' [2020-09-27:21:13:38:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:38:INFO] Sniff delimiter as ',' [2020-09-27:21:13:38:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:38:INFO] Sniff delimiter as ',' [2020-09-27:21:13:38:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:38:INFO] Sniff delimiter as ',' [2020-09-27:21:13:38:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:38:INFO] Sniff delimiter as ',' [2020-09-27:21:13:38:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:38:INFO] Sniff delimiter as ',' [2020-09-27:21:13:38:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:38:INFO] Sniff delimiter as ',' [2020-09-27:21:13:38:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:40:INFO] Sniff delimiter as ',' [2020-09-27:21:13:40:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:40:INFO] Sniff delimiter as ',' [2020-09-27:21:13:40:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:41:INFO] Sniff delimiter as ',' [2020-09-27:21:13:41:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:40:INFO] Sniff delimiter as ',' [2020-09-27:21:13:40:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:40:INFO] Sniff delimiter as ',' [2020-09-27:21:13:40:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:41:INFO] Sniff delimiter as ',' [2020-09-27:21:13:41:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:41:INFO] Sniff delimiter as ',' [2020-09-27:21:13:41:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:41:INFO] Sniff delimiter as ',' [2020-09-27:21:13:41:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:43:INFO] Sniff delimiter as ',' [2020-09-27:21:13:43:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:43:INFO] Sniff delimiter as ',' [2020-09-27:21:13:43:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:43:INFO] Sniff delimiter as ',' [2020-09-27:21:13:43:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:43:INFO] Sniff delimiter as ',' [2020-09-27:21:13:43:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:43:INFO] Sniff delimiter as ',' [2020-09-27:21:13:43:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:43:INFO] Sniff delimiter as ',' [2020-09-27:21:13:43:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:43:INFO] Sniff delimiter as ',' [2020-09-27:21:13:43:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:43:INFO] Sniff delimiter as ',' [2020-09-27:21:13:43:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:45:INFO] Sniff delimiter as ',' [2020-09-27:21:13:45:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:45:INFO] Sniff delimiter as ',' [2020-09-27:21:13:45:INFO] Determined delimiter of CSV input is ',' 
[2020-09-27:21:13:45:INFO] Sniff delimiter as ',' [2020-09-27:21:13:45:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:46:INFO] Sniff delimiter as ',' [2020-09-27:21:13:46:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:45:INFO] Sniff delimiter as ',' [2020-09-27:21:13:45:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:45:INFO] Sniff delimiter as ',' [2020-09-27:21:13:45:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:45:INFO] Sniff delimiter as ',' [2020-09-27:21:13:45:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:13:46:INFO] Sniff delimiter as ',' [2020-09-27:21:13:46:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/366.1 KiB (2.8 MiB/s) with 1 file(s) remaining Completed 366.1 KiB/366.1 KiB (3.9 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-1-956613579044/xgboost-2020-09-27-21-08-16-025/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. Using already existing model: xgboost-2020-09-27-20-07-06-714 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. 
xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['seen', 'movi', 'mani', 'time', 'recent', 'read', 'book', 'movi', 'base', 'everi', 'time', 'see', 'want', 'slap', 'four', 'fact', 'clue', 'fact', 'tom', 'hank', 'charact', 'flip', 'oop', 'persona', 'oh', 'act', 'charact', 'outsid', 'game', 'session', 'fact', 'three', 'month', 'therapi', 'let', 'destroy', 'feed', 'delus', 'kind', 'peopl', 'give', 'rpg', 'bad', 'name', 'also', 'corni', 'love', 'ballad', 'music', 'done', 'cat', 'piano', 'stop', 'us', 'get', 'annoy', 'almost', 'enough', 'set', 'teeth', 'edg'], 0) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews. To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:484: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None' warnings.warn("The parameter 'token_pattern' will not be used" ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'weaker', 'hostil', 'modest', 'orchestr', 'motorcycl', 'turtl', 'dubiou', 'epitom'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'banana', 'spill', 'substanti', 'monti', 'scariest', '21st', 'bach', 'playboy'} ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something.
In particular, we wouldn't really expect any of the words above to appear with too much frequency. **Question** What exactly is going on here? Not only which (if any) words appear with a larger than expected frequency, but also, what does this mean? What has changed about the world that our original model no longer takes into account? **NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. (One possible starting point for this investigation is sketched a little further below, once the new data has been re-encoded.) (TODO) Build a new model Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed. To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model. **NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results. In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance.
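Before doing so, a brief aside on the earlier **Question** about the shift in word usage. The cell below is a minimal sketch that is not part of the original notebook: it simply counts how often the words that are new to the vocabulary actually occur in the re-encoded reviews, which is one way to spot a word that suddenly shows up with a suspiciously large frequency. It assumes the `original_vocabulary`, `new_vocabulary`, `new_vectorizer` and `new_XV` objects defined in the cells above are still in memory. ###Code
# Rough sketch (not part of the original notebook): count how often the words
# that are new to the vocabulary occur in the re-encoded new reviews.
# Assumes original_vocabulary, new_vocabulary, new_vectorizer and new_XV
# are still defined from the cells above.
import numpy as np

new_word_counts = {}
for word in (new_vocabulary - original_vocabulary):
    col = new_vectorizer.vocabulary_[word]               # column index of this word in the new encoding
    new_word_counts[word] = int(np.sum(new_XV[:, col]))  # total occurrences across all of the new reviews

# Words with unusually large counts are good candidates for "what has changed".
print(sorted(new_word_counts.items(), key=lambda kv: kv[1], reverse=True))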
###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2020-09-27 21:23:31 Starting - Starting the training job... 2020-09-27 21:23:33 Starting - Launching requested ML instances......... 2020-09-27 21:25:03 Starting - Preparing the instances for training... 2020-09-27 21:26:02 Downloading - Downloading input data... 2020-09-27 21:26:23 Training - Downloading the training image..Arguments: train [2020-09-27:21:26:44:INFO] Running standalone xgboost training. [2020-09-27:21:26:44:INFO] File size need to be processed in the node: 238.47mb. 
Available memory size in the node: 8471.42mb [2020-09-27:21:26:44:INFO] Determined delimiter of CSV input is ',' [21:26:44] S3DistributionType set as FullyReplicated [21:26:46] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-09-27:21:26:46:INFO] Determined delimiter of CSV input is ',' [21:26:46] S3DistributionType set as FullyReplicated [21:26:47] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, 2020-09-27 21:26:43 Training - Training image download completed. Training in progress.[21:26:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 2 pruned nodes, max_depth=5 [0]#011train-error:0.292133#011validation-error:0.3065 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [21:26:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [1]#011train-error:0.292533#011validation-error:0.3034 [21:26:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 10 pruned nodes, max_depth=5 [2]#011train-error:0.281133#011validation-error:0.293 [21:26:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [3]#011train-error:0.270133#011validation-error:0.2808 [21:26:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [4]#011train-error:0.262533#011validation-error:0.2715 [21:26:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.260533#011validation-error:0.2687 [21:26:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [6]#011train-error:0.256267#011validation-error:0.2633 [21:27:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [7]#011train-error:0.249933#011validation-error:0.2604 [21:27:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 12 pruned nodes, max_depth=5 [8]#011train-error:0.2424#011validation-error:0.2531 [21:27:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 4 pruned nodes, max_depth=5 [9]#011train-error:0.2334#011validation-error:0.2477 [21:27:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 0 pruned nodes, max_depth=5 [10]#011train-error:0.229333#011validation-error:0.2429 [21:27:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [11]#011train-error:0.223667#011validation-error:0.2403 [21:27:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [12]#011train-error:0.219333#011validation-error:0.2366 [21:27:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [13]#011train-error:0.2142#011validation-error:0.2359 [21:27:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [14]#011train-error:0.211667#011validation-error:0.2337 [21:27:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 4 pruned nodes, max_depth=5 [15]#011train-error:0.206533#011validation-error:0.2297 [21:27:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 
pruned nodes, max_depth=5 [16]#011train-error:0.204933#011validation-error:0.2274 [21:27:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [17]#011train-error:0.202467#011validation-error:0.2275 [21:27:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [18]#011train-error:0.200067#011validation-error:0.2266 [21:27:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [19]#011train-error:0.1956#011validation-error:0.2239 [21:27:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [20]#011train-error:0.194067#011validation-error:0.2209 [21:27:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [21]#011train-error:0.191933#011validation-error:0.2194 [21:27:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5 [22]#011train-error:0.190267#011validation-error:0.2189 [21:27:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [23]#011train-error:0.187133#011validation-error:0.216 [21:27:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [24]#011train-error:0.1856#011validation-error:0.2137 [21:27:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [25]#011train-error:0.183067#011validation-error:0.2133 [21:27:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [26]#011train-error:0.180933#011validation-error:0.2111 [21:27:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [27]#011train-error:0.1802#011validation-error:0.2102 [21:27:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [28]#011train-error:0.180133#011validation-error:0.2102 [21:27:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [29]#011train-error:0.178667#011validation-error:0.209 [21:27:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [30]#011train-error:0.176867#011validation-error:0.2083 [21:27:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [31]#011train-error:0.177267#011validation-error:0.2059 [21:27:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [32]#011train-error:0.1738#011validation-error:0.2047 [21:27:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [33]#011train-error:0.173333#011validation-error:0.2047 [21:27:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [34]#011train-error:0.171733#011validation-error:0.2038 [21:27:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [35]#011train-error:0.169933#011validation-error:0.2033 [21:27:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5 [36]#011train-error:0.169733#011validation-error:0.2017 [21:27:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 
[37]#011train-error:0.167933#011validation-error:0.2027 [21:27:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [38]#011train-error:0.1664#011validation-error:0.2017 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output .............................2020-09-27T21:33:55.265:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2020-09-27 21:33:55 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-09-27 21:33:55 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-09-27 21:33:55 +0000] [1] [INFO] Using worker: gevent [2020-09-27 21:33:55 +0000] [37] [INFO] Booting worker with pid: 37 [2020-09-27 21:33:55 +0000] [38] [INFO] Booting worker with pid: 38 [2020-09-27 21:33:55 +0000] [39] [INFO] Booting worker with pid: 39 [2020-09-27 21:33:55 +0000] [40] [INFO] Booting worker with pid: 40 [2020-09-27:21:33:55:INFO] Model loaded successfully for worker : 38 [2020-09-27:21:33:55:INFO] Model loaded successfully for worker : 37 [2020-09-27:21:33:55:INFO] Model loaded successfully for worker : 39 [2020-09-27:21:33:55:INFO] Model loaded successfully for worker : 40 [2020-09-27:21:33:55:INFO] Sniff delimiter as ',' [2020-09-27:21:33:55:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:33:55:INFO] Sniff delimiter as ',' [2020-09-27:21:33:55:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:33:55:INFO] Sniff delimiter as ',' [2020-09-27:21:33:55:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:33:57:INFO] Sniff delimiter as ',' [2020-09-27:21:33:57:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:33:57:INFO] Sniff delimiter as ',' Arguments: serve [2020-09-27 21:33:55 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-09-27 21:33:55 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-09-27 21:33:55 +0000] [1] [INFO] Using worker: gevent [2020-09-27 21:33:55 +0000] [37] [INFO] Booting worker with pid: 37 [2020-09-27 21:33:55 +0000] [38] [INFO] Booting worker with pid: 38 [2020-09-27 21:33:55 +0000] [39] [INFO] Booting 
worker with pid: 39 [2020-09-27 21:33:55 +0000] [40] [INFO] Booting worker with pid: 40 [2020-09-27:21:33:55:INFO] Model loaded successfully for worker : 38 [2020-09-27:21:33:55:INFO] Model loaded successfully for worker : 37 [2020-09-27:21:33:55:INFO] Model loaded successfully for worker : 39 [2020-09-27:21:33:55:INFO] Model loaded successfully for worker : 40 [2020-09-27:21:33:55:INFO] Sniff delimiter as ',' [2020-09-27:21:33:55:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:33:55:INFO] Sniff delimiter as ',' [2020-09-27:21:33:55:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:33:55:INFO] Sniff delimiter as ',' [2020-09-27:21:33:55:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:33:57:INFO] Sniff delimiter as ',' [2020-09-27:21:33:57:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:33:57:INFO] Sniff delimiter as ',' [2020-09-27:21:33:57:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:33:57:INFO] Sniff delimiter as ',' [2020-09-27:21:33:57:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:33:57:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:33:57:INFO] Sniff delimiter as ',' [2020-09-27:21:33:57:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:33:58:INFO] Sniff delimiter as ',' [2020-09-27:21:33:58:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:33:58:INFO] Sniff delimiter as ',' [2020-09-27:21:33:58:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:33:59:INFO] Sniff delimiter as ',' [2020-09-27:21:33:59:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:00:INFO] Sniff delimiter as ',' [2020-09-27:21:34:00:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:00:INFO] Sniff delimiter as ',' [2020-09-27:21:34:00:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:33:59:INFO] Sniff delimiter as ',' [2020-09-27:21:33:59:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:00:INFO] Sniff delimiter as ',' [2020-09-27:21:34:00:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:00:INFO] Sniff delimiter as ',' [2020-09-27:21:34:00:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:01:INFO] Sniff delimiter as ',' [2020-09-27:21:34:01:INFO] Sniff delimiter as ',' [2020-09-27:21:34:01:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:01:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:02:INFO] Sniff delimiter as ',' [2020-09-27:21:34:02:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:02:INFO] Sniff delimiter as ',' [2020-09-27:21:34:02:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:02:INFO] Sniff delimiter as ',' [2020-09-27:21:34:02:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:02:INFO] Sniff delimiter as ',' [2020-09-27:21:34:02:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:04:INFO] Sniff delimiter as ',' [2020-09-27:21:34:04:INFO] Sniff delimiter as ',' [2020-09-27:21:34:04:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:04:INFO] Sniff delimiter as ',' [2020-09-27:21:34:04:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:04:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:04:INFO] Sniff delimiter as ',' [2020-09-27:21:34:04:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:05:INFO] Sniff delimiter as ',' [2020-09-27:21:34:05:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:05:INFO] Sniff delimiter as ',' 
[2020-09-27:21:34:05:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:05:INFO] Sniff delimiter as ',' [2020-09-27:21:34:05:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:05:INFO] Sniff delimiter as ',' [2020-09-27:21:34:05:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:06:INFO] Sniff delimiter as ',' [2020-09-27:21:34:06:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:06:INFO] Sniff delimiter as ',' [2020-09-27:21:34:06:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:06:INFO] Sniff delimiter as ',' [2020-09-27:21:34:06:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:06:INFO] Sniff delimiter as ',' [2020-09-27:21:34:06:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:07:INFO] Sniff delimiter as ',' [2020-09-27:21:34:07:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:07:INFO] Sniff delimiter as ',' [2020-09-27:21:34:07:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:07:INFO] Sniff delimiter as ',' [2020-09-27:21:34:07:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:07:INFO] Sniff delimiter as ',' [2020-09-27:21:34:07:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:08:INFO] Sniff delimiter as ',' [2020-09-27:21:34:08:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:08:INFO] Sniff delimiter as ',' [2020-09-27:21:34:08:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:08:INFO] Sniff delimiter as ',' [2020-09-27:21:34:08:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:08:INFO] Sniff delimiter as ',' [2020-09-27:21:34:08:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:09:INFO] Sniff delimiter as ',' [2020-09-27:21:34:09:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:09:INFO] Sniff delimiter as ',' [2020-09-27:21:34:09:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:09:INFO] Sniff delimiter as ',' [2020-09-27:21:34:09:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:09:INFO] Sniff delimiter as ',' [2020-09-27:21:34:09:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:11:INFO] Sniff delimiter as ',' [2020-09-27:21:34:11:INFO] Sniff delimiter as ',' [2020-09-27:21:34:11:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:11:INFO] Sniff delimiter as ',' [2020-09-27:21:34:11:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:11:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:11:INFO] Sniff delimiter as ',' [2020-09-27:21:34:11:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:12:INFO] Sniff delimiter as ',' [2020-09-27:21:34:12:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:12:INFO] Sniff delimiter as ',' [2020-09-27:21:34:12:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:12:INFO] Sniff delimiter as ',' [2020-09-27:21:34:12:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:12:INFO] Sniff delimiter as ',' [2020-09-27:21:34:12:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:13:INFO] Sniff delimiter as ',' [2020-09-27:21:34:13:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:13:INFO] Sniff delimiter as ',' [2020-09-27:21:34:13:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:14:INFO] Sniff delimiter as ',' [2020-09-27:21:34:14:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:14:INFO] Sniff delimiter as ',' 
[2020-09-27:21:34:14:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:15:INFO] Sniff delimiter as ',' [2020-09-27:21:34:15:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:15:INFO] Sniff delimiter as ',' [2020-09-27:21:34:15:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:16:INFO] Sniff delimiter as ',' [2020-09-27:21:34:16:INFO] Determined delimiter of CSV input is ',' [2020-09-27:21:34:16:INFO] Sniff delimiter as ',' [2020-09-27:21:34:16:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/366.3 KiB (2.5 MiB/s) with 1 file(s) remaining Completed 366.3 KiB/366.3 KiB (3.6 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-1-956613579044/xgboost-2020-09-27-21-29-16-325/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. 
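As a side note on the earlier leakage question: because the new model was both trained and evaluated on the same freshly collected reviews, one simple mitigation would be to hold out a slice of the new data before any training and keep it purely for evaluation. The cell below is a minimal, self-contained sketch of that pattern; the toy arrays and variable names are invented for illustration and are not part of the original notebook. ###Code
# Minimal sketch of avoiding leakage: carve out a held-out test split *before*
# training, so that evaluation never touches data the model was fit on.
# The arrays below are toy placeholders, not the notebook's real data.
import numpy as np
from sklearn.model_selection import train_test_split

toy_features = np.random.rand(1000, 20)          # stand-in for bag-of-words features
toy_labels = np.random.randint(0, 2, size=1000)  # stand-in for sentiment labels

# Keep 20% aside as a true test set; only the remaining 80% would be used
# for training and validation.
X_trainval, X_holdout, y_trainval, y_holdout = train_test_split(
    toy_features, toy_labels, test_size=0.2, random_state=0)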
###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the Model So we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model. Of course, to do this we need to create an endpoint configuration for our newly created model. First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want. **TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "sentiment-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "AllTraffic" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration. Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used. **TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. 
Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2019-05-22 09:32:23-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 37.1MB/s in 2.2s 2019-05-22 09:32:25 (37.1 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
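To make the last point concrete, here is a small sketch that is not part of the original notebook (the toy documents below are invented for illustration): a vectorizer fitted only on training documents is reused, unchanged, on unseen documents, and any word it has never seen is simply ignored. ###Code
# Tiny illustration of the Bag-of-Words idea: fit on training documents only,
# then apply the *same* fitted vectorizer to unseen documents.
# The documents below are toy examples, not the IMDb data.
from sklearn.feature_extraction.text import CountVectorizer

toy_train = [['great', 'movi', 'great', 'act'], ['bad', 'plot', 'bad', 'act']]
toy_test = [['great', 'plot', 'unseen', 'word']]  # 'unseen' and 'word' are not in the training vocabulary

toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)  # documents are already tokenized
train_bow = toy_vectorizer.fit_transform(toy_train).toarray()
test_bow = toy_vectorizer.transform(toy_test).toarray()  # unknown words are dropped

print(toy_vectorizer.vocabulary_)  # word -> column index, learned from the training documents only
print(train_bow)
print(test_bow)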
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another:
- Model Artifacts
- Training Code (Container)
- Inference Code (Container)
The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training. The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data. The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost model Now that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2019-05-22 09:47:48 Starting - Starting the training job... 2019-05-22 09:47:49 Starting - Launching requested ML instances...... 2019-05-22 09:48:55 Starting - Preparing the instances for training...... 2019-05-22 09:49:54 Downloading - Downloading input data.. Arguments: train [2019-05-22:09:50:28:INFO] Running standalone xgboost training. [2019-05-22:09:50:28:INFO] File size need to be processed in the node: 238.47mb.
Available memory size in the node: 8382.36mb [2019-05-22:09:50:28:INFO] Determined delimiter of CSV input is ',' [09:50:28] S3DistributionType set as FullyReplicated [09:50:30] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2019-05-22:09:50:30:INFO] Determined delimiter of CSV input is ',' [09:50:30] S3DistributionType set as FullyReplicated [09:50:31] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [09:50:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 0 pruned nodes, max_depth=5 [0]#011train-error:0.292667#011validation-error:0.3066 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. 2019-05-22 09:50:27 Training - Training image download completed. Training in progress.[09:50:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [1]#011train-error:0.277933#011validation-error:0.2876 [09:50:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 2 pruned nodes, max_depth=5 [2]#011train-error:0.277733#011validation-error:0.2857 [09:50:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 12 pruned nodes, max_depth=5 [3]#011train-error:0.269733#011validation-error:0.2821 [09:50:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 0 pruned nodes, max_depth=5 [4]#011train-error:0.266667#011validation-error:0.2788 [09:50:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [5]#011train-error:0.2534#011validation-error:0.2645 [09:50:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 2 pruned nodes, max_depth=5 [6]#011train-error:0.2392#011validation-error:0.2561 [09:50:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [7]#011train-error:0.2338#011validation-error:0.2521 [09:50:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [8]#011train-error:0.226467#011validation-error:0.2462 [09:50:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [9]#011train-error:0.225667#011validation-error:0.2437 [09:50:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [10]#011train-error:0.219867#011validation-error:0.2391 [09:50:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5 [11]#011train-error:0.2166#011validation-error:0.2339 [09:50:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [12]#011train-error:0.211#011validation-error:0.231 [09:50:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [13]#011train-error:0.207#011validation-error:0.2275 [09:50:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [14]#011train-error:0.2034#011validation-error:0.2239 [09:50:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [15]#011train-error:0.200733#011validation-error:0.2242 [09:50:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, 
max_depth=5 [16]#011train-error:0.194867#011validation-error:0.2191 [09:50:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [17]#011train-error:0.192#011validation-error:0.2161 [09:50:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [18]#011train-error:0.189333#011validation-error:0.2149 [09:50:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 22 pruned nodes, max_depth=5 [19]#011train-error:0.187267#011validation-error:0.2134 [09:51:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [20]#011train-error:0.184267#011validation-error:0.2112 [09:51:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [21]#011train-error:0.183267#011validation-error:0.2081 [09:51:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [22]#011train-error:0.180267#011validation-error:0.2059 [09:51:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [23]#011train-error:0.177533#011validation-error:0.2029 [09:51:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [24]#011train-error:0.1742#011validation-error:0.2005 [09:51:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [25]#011train-error:0.173067#011validation-error:0.198 [09:51:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [26]#011train-error:0.169267#011validation-error:0.1959 [09:51:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [27]#011train-error:0.166933#011validation-error:0.1937 [09:51:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [28]#011train-error:0.165#011validation-error:0.1933 [09:51:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [29]#011train-error:0.163#011validation-error:0.1914 [09:51:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 16 pruned nodes, max_depth=5 [30]#011train-error:0.162467#011validation-error:0.1891 [09:51:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5 [31]#011train-error:0.1624#011validation-error:0.1883 [09:51:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [32]#011train-error:0.159933#011validation-error:0.1889 [09:51:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5 [33]#011train-error:0.1584#011validation-error:0.1869 [09:51:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [34]#011train-error:0.1574#011validation-error:0.1846 [09:51:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 14 pruned nodes, max_depth=5 [35]#011train-error:0.156533#011validation-error:0.1836 [09:51:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [36]#011train-error:0.155#011validation-error:0.1824 [09:51:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 
[37]#011train-error:0.154667#011validation-error:0.1826 [09:51:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [38]#011train-error:0.154733#011validation-error:0.1807 [09:51:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [39]#011train-error:0.152667#011validation-error:0.1799 [09:51:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [40]#011train-error:0.151#011validation-error:0.1791 [09:51:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [41]#011train-error:0.149667#011validation-error:0.1773 [09:51:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [42]#011train-error:0.15#011validation-error:0.1758 [09:51:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [43]#011train-error:0.148133#011validation-error:0.174 [09:51:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [44]#011train-error:0.144733#011validation-error:0.1736 [09:51:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [45]#011train-error:0.145267#011validation-error:0.1746 [09:51:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5 [46]#011train-error:0.1442#011validation-error:0.1738 [09:51:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [47]#011train-error:0.142533#011validation-error:0.1727 [09:51:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [48]#011train-error:0.1412#011validation-error:0.1716 [09:51:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [49]#011train-error:0.1408#011validation-error:0.1712 [09:51:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 [50]#011train-error:0.1406#011validation-error:0.1706 [09:51:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 14 pruned nodes, max_depth=5 [51]#011train-error:0.138733#011validation-error:0.1699 [09:51:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=5 [52]#011train-error:0.139067#011validation-error:0.1686 [09:51:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5 [53]#011train-error:0.137467#011validation-error:0.1683 [09:51:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 18 pruned nodes, max_depth=5 [54]#011train-error:0.1358#011validation-error:0.1683 [09:51:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [55]#011train-error:0.134867#011validation-error:0.1677 [09:51:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [56]#011train-error:0.134467#011validation-error:0.1661 [09:51:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [57]#011train-error:0.133667#011validation-error:0.1659 [09:51:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 
[58]#011train-error:0.1322#011validation-error:0.166 [09:51:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [59]#011train-error:0.1316#011validation-error:0.1647 [09:51:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5 [60]#011train-error:0.1308#011validation-error:0.1625 [09:51:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [61]#011train-error:0.130333#011validation-error:0.1623 [09:51:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5 [62]#011train-error:0.1296#011validation-error:0.1624 [09:51:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [63]#011train-error:0.128667#011validation-error:0.1622 [09:51:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5 [64]#011train-error:0.126733#011validation-error:0.1614 [09:51:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5 [65]#011train-error:0.1262#011validation-error:0.1608 [09:51:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [66]#011train-error:0.1262#011validation-error:0.1609 [09:52:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [67]#011train-error:0.1258#011validation-error:0.1617 [09:52:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 16 pruned nodes, max_depth=5 [68]#011train-error:0.125067#011validation-error:0.1605 [09:52:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5 [69]#011train-error:0.124533#011validation-error:0.1599 [09:52:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5 [70]#011train-error:0.124067#011validation-error:0.1597 [09:52:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [71]#011train-error:0.123867#011validation-error:0.1604 [09:52:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [72]#011train-error:0.1226#011validation-error:0.161 [09:52:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [73]#011train-error:0.121933#011validation-error:0.1595 [09:52:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [74]#011train-error:0.1212#011validation-error:0.1598 [09:52:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [75]#011train-error:0.121733#011validation-error:0.1596 [09:52:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [76]#011train-error:0.1206#011validation-error:0.1585 [09:52:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [77]#011train-error:0.119733#011validation-error:0.1582 [09:52:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [78]#011train-error:0.119533#011validation-error:0.158 [09:52:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5 
[79]#011train-error:0.118#011validation-error:0.1577 [09:52:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [80]#011train-error:0.1174#011validation-error:0.1572 [09:52:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [81]#011train-error:0.1176#011validation-error:0.1578 [09:52:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5 [82]#011train-error:0.117133#011validation-error:0.1579 [09:52:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5 [83]#011train-error:0.117067#011validation-error:0.158 [09:52:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5 [84]#011train-error:0.116267#011validation-error:0.1569 [09:52:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [85]#011train-error:0.116067#011validation-error:0.1563 [09:52:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [86]#011train-error:0.114867#011validation-error:0.1558 [09:52:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 22 pruned nodes, max_depth=5 [87]#011train-error:0.1152#011validation-error:0.1557 [09:52:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [88]#011train-error:0.1148#011validation-error:0.1554 [09:52:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [89]#011train-error:0.113667#011validation-error:0.1557 [09:52:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [90]#011train-error:0.112933#011validation-error:0.1548 [09:52:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5 [91]#011train-error:0.112267#011validation-error:0.1554 [09:52:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5 [92]#011train-error:0.1122#011validation-error:0.1542 [09:52:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [93]#011train-error:0.111067#011validation-error:0.1538 [09:52:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5 [94]#011train-error:0.111067#011validation-error:0.1537 [09:52:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5 [95]#011train-error:0.111467#011validation-error:0.1532 [09:52:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [96]#011train-error:0.111067#011validation-error:0.1532 [09:52:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 [97]#011train-error:0.110733#011validation-error:0.1529 [09:52:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5 [98]#011train-error:0.1104#011validation-error:0.1532 [09:52:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [99]#011train-error:0.109867#011validation-error:0.1524 [09:52:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 
[100]#011train-error:0.109533#011validation-error:0.1521 [09:52:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5 [101]#011train-error:0.109467#011validation-error:0.1525 [09:52:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5 [102]#011train-error:0.1088#011validation-error:0.152 [09:52:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5 [103]#011train-error:0.108533#011validation-error:0.1519 [09:52:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5 [104]#011train-error:0.108067#011validation-error:0.1516 [09:52:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [105]#011train-error:0.1076#011validation-error:0.1511 [09:52:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [106]#011train-error:0.1072#011validation-error:0.1507 [09:52:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [107]#011train-error:0.106#011validation-error:0.1504 [09:52:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [108]#011train-error:0.1054#011validation-error:0.1495 [09:52:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [109]#011train-error:0.104867#011validation-error:0.1499 [09:52:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 18 pruned nodes, max_depth=5 [110]#011train-error:0.103533#011validation-error:0.1503 [09:52:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5 [111]#011train-error:0.103867#011validation-error:0.1501 [09:52:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5 [112]#011train-error:0.1038#011validation-error:0.1504 [09:52:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 20 pruned nodes, max_depth=5 [113]#011train-error:0.103533#011validation-error:0.1506 [09:53:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [114]#011train-error:0.1036#011validation-error:0.1502 [09:53:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5 [115]#011train-error:0.103467#011validation-error:0.1494 [09:53:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5 [116]#011train-error:0.1034#011validation-error:0.1482 [09:53:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [117]#011train-error:0.103067#011validation-error:0.1475 [09:53:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [118]#011train-error:0.1028#011validation-error:0.1479 [09:53:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5 [119]#011train-error:0.102467#011validation-error:0.1485 [09:53:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5 [120]#011train-error:0.102467#011validation-error:0.1478 [09:53:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 
[121]#011train-error:0.1018#011validation-error:0.1482 [09:53:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [122]#011train-error:0.101067#011validation-error:0.1469 [09:53:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [123]#011train-error:0.1006#011validation-error:0.1476 [09:53:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [124]#011train-error:0.1008#011validation-error:0.1477 [09:53:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 16 pruned nodes, max_depth=5 [125]#011train-error:0.100333#011validation-error:0.1474 [09:53:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [126]#011train-error:0.099933#011validation-error:0.1472 [09:53:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [127]#011train-error:0.099533#011validation-error:0.1483 [09:53:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [128]#011train-error:0.0986#011validation-error:0.1477 [09:53:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5 [129]#011train-error:0.098533#011validation-error:0.1468 [09:53:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5 [130]#011train-error:0.098067#011validation-error:0.147 [09:53:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 18 pruned nodes, max_depth=5 [131]#011train-error:0.096933#011validation-error:0.1462 [09:53:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5 [132]#011train-error:0.096333#011validation-error:0.1461 [09:53:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [133]#011train-error:0.0958#011validation-error:0.1463 [09:53:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [134]#011train-error:0.095933#011validation-error:0.1461 [09:53:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [135]#011train-error:0.0952#011validation-error:0.1462 [09:53:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [136]#011train-error:0.0948#011validation-error:0.1461 [09:53:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5 [137]#011train-error:0.094467#011validation-error:0.1459 [09:53:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5 [138]#011train-error:0.094133#011validation-error:0.1454 [09:53:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [139]#011train-error:0.093933#011validation-error:0.1457 [09:53:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5 [140]#011train-error:0.0938#011validation-error:0.1459 [09:53:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5 [141]#011train-error:0.093333#011validation-error:0.1451 [09:53:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 
[142]#011train-error:0.093#011validation-error:0.1445 [09:53:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5 [143]#011train-error:0.092533#011validation-error:0.1448 [09:53:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5 [144]#011train-error:0.092933#011validation-error:0.1437 [09:53:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 16 pruned nodes, max_depth=5 [145]#011train-error:0.0922#011validation-error:0.1441 [09:53:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 16 pruned nodes, max_depth=5 [146]#011train-error:0.092133#011validation-error:0.1442 [09:53:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 2 pruned nodes, max_depth=5 [147]#011train-error:0.091733#011validation-error:0.1448 [09:53:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5 [148]#011train-error:0.0914#011validation-error:0.1445 [09:53:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5 [149]#011train-error:0.0908#011validation-error:0.144 [09:53:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5 [150]#011train-error:0.090333#011validation-error:0.1442 [09:53:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5 [151]#011train-error:0.09#011validation-error:0.144 [09:53:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5 [152]#011train-error:0.089933#011validation-error:0.1439 [09:53:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 16 pruned nodes, max_depth=5 [153]#011train-error:0.0894#011validation-error:0.1444 [09:53:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5 [154]#011train-error:0.0894#011validation-error:0.1439 Stopping. Best iteration: [144]#011train-error:0.092933#011validation-error:0.1437  2019-05-22 09:54:02 Uploading - Uploading generated training model 2019-05-22 09:54:02 Completed - Training job completed Billable seconds: 248 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. 
Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output ...............................................! ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-us-west-2-839762460060/xgboost-2019-05-22-09-54-33-389/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
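The key detail in this TODO is that the new reviews must be encoded with the *original* vocabulary, so that every word maps to the same column index that the XGBoost model saw during training; the solution cell below does exactly that. As a small optional illustration (this check is ours, not part of the original notebook), a `CountVectorizer` constructed with `vocabulary=` simply reuses whatever word-to-index mapping it is given: ###Code
# Optional illustration (not required): a CountVectorizer built with `vocabulary=` keeps the
# word -> column-index mapping we pass in, so features stay aligned with the trained model.
from sklearn.feature_extraction.text import CountVectorizer

check_vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x)
some_word = next(iter(vocabulary))  # an arbitrary word from the original vocabulary
# Both lookups should print the same column index for the chosen word.
print(some_word, vocabulary[some_word], check_vectorizer.vocabulary[some_word])
###Output _____no_output_____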
###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.fit_transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, "new_data.csv"), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, "new_data.csv"), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer.transform(new_data_location, content_type="text/csv", split_type="Line") xgb_transformer.wait() ###Output .................................................! ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-us-west-2-839762460060/xgboost-2019-05-22-09-54-33-389/new_data.csv.out to ../data/sentiment_update/new_data.csv.out download: s3://sagemaker-us-west-2-839762460060/xgboost-2019-05-22-09-54-33-389/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. 
In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge") ###Output ---------------------------------------------------------------------------------------! ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply pass the generator to Python's built-in `next` function. 
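The cell below draws a single sample in this way. As an optional aside (not part of the original notebook), since `gn` is an ordinary Python generator we could also collect a few misclassified reviews at once with `itertools.islice`; note that every sample drawn sends a request to the deployed endpoint, and that running this sketch advances the generator, so the sample printed in the following cell would then be a different one. ###Code
# Optional sketch: gather a few misclassified (review_tokens, true_label) pairs from the generator above.
# Each item drawn from `gn` calls the deployed endpoint, so keep the count small.
from itertools import islice

few_misclassified = list(islice(gn, 3))
for tokens, label in few_misclassified:
    # Print the true label and the first few tokens of each misclassified review.
    print(label, ' '.join(tokens[:20]), '...')
###Output _____no_output_____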
###Code print(next(gn)) ###Output (['upon', 'straight', 'stori', 'releas', '1999', 'prais', 'david', 'lynch', 'first', 'film', 'ignor', 'regular', 'theme', 'macabr', 'surreal', 'base', 'true', 'stori', 'one', 'man', 'journey', 'visit', 'estrang', 'brother', 'john', 'deer', '66', 'mower', 'first', 'glanc', 'odd', 'stori', 'lynch', 'direct', 'yet', 'stori', 'develop', 'see', 'lynch', 'trademark', 'motif', 'come', 'lynch', 'focu', 'small', 'town', 'america', 'inhabit', 'still', 'preval', 'previou', 'effort', 'blue', 'velvet', 'twin', 'peak', 'notabl', 'differ', 'weird', 'curb', 'restrict', 'impos', 'mean', 'film', 'notabl', 'accolad', 'one', 'live', 'action', 'film', 'think', 'featur', 'g', 'rate', 'incred', 'signific', 'film', 'stand', 'evid', 'beauti', 'signific', 'famili', 'film', 'produc', 'straight', 'stori', 'first', 'featur', 'lynch', 'direct', 'hand', 'write', 'mani', 'lynch', 'devote', 'huge', 'neg', 'point', 'almost', 'univers', 'acclaim', 'overli', 'neg', 'review', 'jame', 'brundag', 'filmcrit', 'com', 'focus', 'critic', 'typic', 'lynch', 'film', 'lynch', 'struggl', 'within', 'mold', 'g', 'rate', 'stori', 'brundag', 'claim', 'protagonist', 'alvin', 'straight', 'quot', 'line', 'directli', 'confuci', 'argu', 'stori', 'weak', 'dialogu', 'even', 'wors', 'yet', 'critic', 'mani', 'read', 'film', 'whilst', 'true', 'lynch', 'sens', 'eraserhead', 'lost', 'highway', 'mulholland', 'drive', 'film', 'also', 'ador', 'straight', 'stori', 'featur', 'differ', 'side', 'lynch', 'mean', 'terribl', 'lynch', 'fan', 'import', 'separ', 'side', 'lynch', 'featur', 'narr', 'slow', 'thought', 'give', 'real', 'sens', 'protagonist', 'thought', 'travel', 'destin', 'alvin', 'constantli', 'remind', 'past', 'relationship', 'wife', 'children', 'brother', 'yet', 'particularli', 'signific', 'flashback', 'add', 'effect', 'remind', 'convers', 'grandpar', 'conclus', 'arriv', 'like', 'watch', 'boat', 'carri', 'slow', 'meander', 'river', 'beauti', 'watch', 'natur', 'landscap', 'us', 'accentu', 'togeth', 'beauti', 'soundtrack', 'angelo', 'badalamenti', 'make', 'yearn', 'go', 'america', 'perform', 'also', 'excel', 'everi', 'actor', 'believ', 'role', 'richard', 'farnsworth', 'particularli', 'excel', 'oscar', 'nomin', 'greatli', 'deserv', 'shame', 'win', 'regardless', 'howev', 'probabl', 'finest', 'swan', 'song', 'actor', 'whilst', 'straight', 'stori', 'featur', 'none', 'lynch', 'complex', 'narr', 'trademark', 'dialogu', 'film', 'fascin', 'charact', 'studi', 'get', 'old', 'come', 'highli', 'recommend', 'banana'], 0) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. 
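The next cells compare the two vocabularies simply by set membership. Since membership alone does not tell us how *often* a word is used, here is an optional rough sketch for counting total occurrences of any token across the new reviews, assuming the `new_vectorizer` and `new_X` objects from the cells above; the helper name and example token are ours, not part of the original notebook. ###Code
# Optional sketch: total occurrences of a token across the new reviews.
# The transform result is kept sparse to avoid building another large dense array.
new_token_totals = new_vectorizer.transform(new_X).sum(axis=0)  # 1 x 5000 matrix of per-token totals

def new_corpus_count(word):
    """Summed count of `word` over the new reviews, or 0 if it is not in the new vocabulary."""
    idx = new_vectorizer.vocabulary_.get(word)
    return 0 if idx is None else int(new_token_totals[0, idx])

# Example usage with an arbitrary (hypothetical) token:
# new_corpus_count('movi')
###Output _____no_output_____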
###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'victorian', 'playboy', 'spill', 'reincarn', 'ghetto', '21st', 'weari'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'masterson', 'optimist', 'dubiou', 'orchestr', 'omin', 'sophi', 'banana'} ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. ###Code # Note: the numbers printed below are each word's column index within its vocabulary, not a count of how often the word occurs. print("Words that were in the original vocabulary but not in the new vocabulary: ") for key in (original_vocabulary - new_vocabulary): print(f"{key}: {vocabulary[key]}") print("\n") print("Words that were in the new vocabulary but not in the original vocabulary: ") for key in (new_vocabulary - original_vocabulary): print(f"{key}: {new_vectorizer.vocabulary_[key]}") ###Output Words that were in the original vocabulary but not in the new vocabulary: victorian: 4776 playboy: 3353 spill: 4183 reincarn: 3654 ghetto: 1945 21st: 67 weari: 4861 Words that were in the new vocabulary but not in the original vocabulary: masterson: 2803 optimist: 3169 dubiou: 1426 orchestr: 3172 omin: 3156 sophi: 4144 banana: 424 ###Markdown (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. 
As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, "new_data.csv"), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, "new_validation.csv"), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, "new_train.csv"), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, role, train_instance_count=1, train_instance_type="ml.m4.xlarge", output_path=f"s3://{session.default_bucket()}/{prefix}/output", sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. 
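# Note (added comment): the s3_input objects for the original training job above were created with
# content_type='csv', while the inputs below use 'text/csv'. The training logs of both jobs in this
# notebook show the container determining the CSV delimiter either way, so both spellings work here.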
s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type="text/csv") s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type="text/csv") # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({"train": s3_new_input_train, "validation": s3_new_input_validation}) ###Output 2019-05-22 10:34:17 Starting - Starting the training job... 2019-05-22 10:34:18 Starting - Launching requested ML instances...... 2019-05-22 10:35:23 Starting - Preparing the instances for training... 2019-05-22 10:36:17 Downloading - Downloading input data... 2019-05-22 10:36:41 Training - Downloading the training image.. Arguments: train [2019-05-22:10:36:53:INFO] Running standalone xgboost training. [2019-05-22:10:36:53:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8414.98mb [2019-05-22:10:36:53:INFO] Determined delimiter of CSV input is ',' [10:36:53] S3DistributionType set as FullyReplicated [10:36:55] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2019-05-22:10:36:55:INFO] Determined delimiter of CSV input is ',' [10:36:55] S3DistributionType set as FullyReplicated [10:36:56] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [10:37:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 2 pruned nodes, max_depth=5 [0]#011train-error:0.306933#011validation-error:0.3261 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [10:37:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 10 pruned nodes, max_depth=5 [1]#011train-error:0.292133#011validation-error:0.3053 [10:37:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 6 pruned nodes, max_depth=5 [2]#011train-error:0.280267#011validation-error:0.2929 [10:37:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [3]#011train-error:0.276333#011validation-error:0.2891 2019-05-22 10:36:53 Training - Training image download completed. 
Training in progress.[10:37:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [4]#011train-error:0.270933#011validation-error:0.2863 [10:37:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 12 pruned nodes, max_depth=5 [5]#011train-error:0.260667#011validation-error:0.2806 [10:37:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 2 pruned nodes, max_depth=5 [6]#011train-error:0.251933#011validation-error:0.2734 [10:37:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [7]#011train-error:0.250467#011validation-error:0.2688 [10:37:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [8]#011train-error:0.243133#011validation-error:0.2628 [10:37:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [9]#011train-error:0.2384#011validation-error:0.2581 [10:37:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [10]#011train-error:0.2356#011validation-error:0.2563 [10:37:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [11]#011train-error:0.2306#011validation-error:0.2497 [10:37:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [12]#011train-error:0.225667#011validation-error:0.2469 [10:37:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [13]#011train-error:0.2216#011validation-error:0.245 [10:37:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [14]#011train-error:0.221133#011validation-error:0.2414 [10:37:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [15]#011train-error:0.217133#011validation-error:0.2387 [10:37:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [16]#011train-error:0.212133#011validation-error:0.234 [10:37:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [17]#011train-error:0.2092#011validation-error:0.2336 [10:37:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [18]#011train-error:0.205333#011validation-error:0.2298 [10:37:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [19]#011train-error:0.2016#011validation-error:0.2272 [10:37:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 2 pruned nodes, max_depth=5 [20]#011train-error:0.1966#011validation-error:0.2249 [10:37:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [21]#011train-error:0.194267#011validation-error:0.2229 [10:37:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [22]#011train-error:0.192067#011validation-error:0.2207 [10:37:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [23]#011train-error:0.189733#011validation-error:0.2177 [10:37:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [24]#011train-error:0.189333#011validation-error:0.2171 [10:37:32] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [25]#011train-error:0.1876#011validation-error:0.2152 [10:37:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [26]#011train-error:0.186#011validation-error:0.2113 [10:37:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [27]#011train-error:0.184667#011validation-error:0.2102 [10:37:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [28]#011train-error:0.181733#011validation-error:0.2074 [10:37:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [29]#011train-error:0.181#011validation-error:0.2071 [10:37:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [30]#011train-error:0.179333#011validation-error:0.2056 [10:37:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [31]#011train-error:0.177067#011validation-error:0.2021 [10:37:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [32]#011train-error:0.174667#011validation-error:0.201 [10:37:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [33]#011train-error:0.172067#011validation-error:0.2001 [10:37:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [34]#011train-error:0.17#011validation-error:0.1983 [10:37:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [35]#011train-error:0.170333#011validation-error:0.1972 [10:37:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [36]#011train-error:0.169267#011validation-error:0.1973 [10:37:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5 [37]#011train-error:0.167667#011validation-error:0.1971 [10:37:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [38]#011train-error:0.166733#011validation-error:0.1953 [10:37:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [39]#011train-error:0.1646#011validation-error:0.1954 [10:37:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [40]#011train-error:0.1642#011validation-error:0.1948 [10:37:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [41]#011train-error:0.163533#011validation-error:0.1945 [10:37:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [42]#011train-error:0.162467#011validation-error:0.1944 [10:37:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [43]#011train-error:0.161067#011validation-error:0.196 [10:37:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [44]#011train-error:0.158933#011validation-error:0.1925 [10:37:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [45]#011train-error:0.1596#011validation-error:0.1914 [10:37:59] src/tree/updater_prune.cc:74: tree 
pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [46]#011train-error:0.159133#011validation-error:0.1912 [10:38:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [47]#011train-error:0.157867#011validation-error:0.1889 [10:38:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [48]#011train-error:0.156133#011validation-error:0.1883 [10:38:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5 [49]#011train-error:0.156533#011validation-error:0.1881 [10:38:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [50]#011train-error:0.156533#011validation-error:0.1872 [10:38:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5 [51]#011train-error:0.155867#011validation-error:0.1866 [10:38:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [52]#011train-error:0.155067#011validation-error:0.1864 [10:38:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [53]#011train-error:0.1544#011validation-error:0.1864 [10:38:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 16 pruned nodes, max_depth=5 [54]#011train-error:0.1532#011validation-error:0.1861 [10:38:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 14 pruned nodes, max_depth=5 [55]#011train-error:0.152333#011validation-error:0.1868 [10:38:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 14 pruned nodes, max_depth=5 [56]#011train-error:0.1508#011validation-error:0.1855 [10:38:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 14 pruned nodes, max_depth=5 [57]#011train-error:0.151267#011validation-error:0.1856 [10:38:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [58]#011train-error:0.1498#011validation-error:0.1851 [10:38:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [59]#011train-error:0.149533#011validation-error:0.1855 [10:38:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5 [60]#011train-error:0.1492#011validation-error:0.1849 [10:38:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [61]#011train-error:0.146867#011validation-error:0.1846 [10:38:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [62]#011train-error:0.146067#011validation-error:0.1851 [10:38:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 2 pruned nodes, max_depth=5 [63]#011train-error:0.145733#011validation-error:0.1842 [10:38:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [64]#011train-error:0.143267#011validation-error:0.1838 [10:38:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [65]#011train-error:0.142867#011validation-error:0.1837 [10:38:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 [66]#011train-error:0.142467#011validation-error:0.183 [10:38:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra 
nodes, 10 pruned nodes, max_depth=5 [67]#011train-error:0.1424#011validation-error:0.1841 [10:38:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [68]#011train-error:0.1414#011validation-error:0.1842 [10:38:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5 [69]#011train-error:0.141267#011validation-error:0.1841 [10:38:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [70]#011train-error:0.141667#011validation-error:0.1831 [10:38:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [71]#011train-error:0.14#011validation-error:0.1836 [10:38:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [72]#011train-error:0.139#011validation-error:0.1841 [10:38:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5 [73]#011train-error:0.138867#011validation-error:0.184 [10:38:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [74]#011train-error:0.138267#011validation-error:0.1843 [10:38:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [75]#011train-error:0.138067#011validation-error:0.1844 2019-05-22 10:38:46 Uploading - Uploading generated training model 2019-05-22 10:38:46 Completed - Training job completed [10:38:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 2 pruned nodes, max_depth=5 [76]#011train-error:0.137867#011validation-error:0.1841 Stopping. Best iteration: [66]#011train-error:0.142467#011validation-error:0.183  Billable seconds: 150 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count=1, instance_type="ml.m4.xlarge") ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type="text/csv", split_type="Line") new_xgb_transformer.wait() ###Output ..............................................! ###Markdown Copy the results to our local instance. 
###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-us-west-2-839762460060/xgboost-2019-05-22-10-39-35-068/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. 
As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_model_name = new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "new-xgb-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output -------------------------------------------------------------------------------------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. 
Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). 
It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output --2020-09-01 15:14:55-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 33.7MB/s in 2.4s 2020-09-01 15:14:58 (33.7 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, 
we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. ###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Wrote preprocessed data to cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
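To make the idea concrete, here is a minimal sketch on a toy example (the `toy_*` names are purely illustrative and not part of this notebook's pipeline): the `CountVectorizer` is fit on the training documents only, and the test documents are only passed through `transform`, so any word that never appeared during training is dropped rather than added to the vocabulary.
###Code
from sklearn.feature_extraction.text import CountVectorizer

# Toy example (illustrative names only): two "training" reviews and one "test" review,
# already tokenized into word lists, mirroring the preprocessor/tokenizer trick used below.
toy_train = [["great", "movie", "great", "acting"], ["boring", "movie"]]
toy_test = [["great", "soundtrack"]]  # "soundtrack" never appears in the training data

toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
toy_train_bow = toy_vectorizer.fit_transform(toy_train).toarray()  # vocabulary learned from training data only
toy_test_bow = toy_vectorizer.transform(toy_test).toarray()        # unseen words are simply dropped

print(toy_vectorizer.vocabulary_)  # word -> column index
print(toy_train_bow)
print(toy_test_bow)
###Output
_____no_output_____
###Markdown
The full, cached implementation for the IMDb data follows.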
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker.
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the training artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2020-09-01 15:44:26 Starting - Starting the training job... 2020-09-01 15:44:28 Starting - Launching requested ML instances...... 2020-09-01 15:45:49 Starting - Preparing the instances for training...... 2020-09-01 15:46:57 Downloading - Downloading input data 2020-09-01 15:46:57 Training - Downloading the training image... 2020-09-01 15:47:16 Training - Training image download completed. Training in progress.Arguments: train [2020-09-01:15:47:16:INFO] Running standalone xgboost training. [2020-09-01:15:47:17:INFO] File size need to be processed in the node: 238.47mb.
Available memory size in the node: 8502.93mb [2020-09-01:15:47:17:INFO] Determined delimiter of CSV input is ',' [15:47:16] S3DistributionType set as FullyReplicated [15:47:18] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-09-01:15:47:18:INFO] Determined delimiter of CSV input is ',' [15:47:18] S3DistributionType set as FullyReplicated [15:47:19] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [15:47:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [0]#011train-error:0.297667#011validation-error:0.3048 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [15:47:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [1]#011train-error:0.2804#011validation-error:0.2804 [15:47:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [2]#011train-error:0.280333#011validation-error:0.2813 [15:47:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [3]#011train-error:0.2674#011validation-error:0.2687 [15:47:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5 [4]#011train-error:0.258333#011validation-error:0.2628 [15:47:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 4 pruned nodes, max_depth=5 [5]#011train-error:0.243467#011validation-error:0.2504 [15:47:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [6]#011train-error:0.248133#011validation-error:0.2545 [15:47:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [7]#011train-error:0.239133#011validation-error:0.2457 [15:47:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 0 pruned nodes, max_depth=5 [8]#011train-error:0.233333#011validation-error:0.2408 [15:47:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [9]#011train-error:0.2282#011validation-error:0.235 [15:47:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [10]#011train-error:0.220467#011validation-error:0.2299 [15:47:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [11]#011train-error:0.214867#011validation-error:0.2251 [15:47:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5 [12]#011train-error:0.211867#011validation-error:0.2219 [15:47:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [13]#011train-error:0.209867#011validation-error:0.22 [15:47:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [14]#011train-error:0.205667#011validation-error:0.2155 [15:47:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5 [15]#011train-error:0.2004#011validation-error:0.2109 [15:47:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [16]#011train-error:0.197133#011validation-error:0.2094 [15:47:45] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [17]#011train-error:0.1944#011validation-error:0.2067 [15:47:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [18]#011train-error:0.191067#011validation-error:0.205 [15:47:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [19]#011train-error:0.188533#011validation-error:0.2025 [15:47:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [20]#011train-error:0.186#011validation-error:0.1997 [15:47:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [21]#011train-error:0.185133#011validation-error:0.1977 [15:47:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [22]#011train-error:0.1832#011validation-error:0.1963 [15:47:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [23]#011train-error:0.181067#011validation-error:0.1942 [15:47:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5 [24]#011train-error:0.177267#011validation-error:0.1924 [15:47:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 0 pruned nodes, max_depth=5 [25]#011train-error:0.1738#011validation-error:0.1893 [15:47:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [26]#011train-error:0.173867#011validation-error:0.1891 [15:47:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [27]#011train-error:0.1714#011validation-error:0.1898 [15:48:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [28]#011train-error:0.17#011validation-error:0.1882 [15:48:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [29]#011train-error:0.167133#011validation-error:0.1848 [15:48:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [30]#011train-error:0.1646#011validation-error:0.1842 [15:48:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [31]#011train-error:0.1638#011validation-error:0.182 [15:48:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [32]#011train-error:0.161333#011validation-error:0.1804 [15:48:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [33]#011train-error:0.158933#011validation-error:0.1798 [15:48:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [34]#011train-error:0.1576#011validation-error:0.179 [15:48:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [35]#011train-error:0.1572#011validation-error:0.1784 [15:48:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [36]#011train-error:0.156#011validation-error:0.1758 [15:48:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 16 pruned nodes, max_depth=5 [37]#011train-error:0.154067#011validation-error:0.1761 ###Markdown Testing the modelNow that we've fit our XGBoost 
model, it's time to see how well it performs. To do this we will use SageMakers Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can peform inference on a large number of samples. An example of this in industry might be peforming an end of month report. This method of inference can also be useful to us as it means to can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer objects from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output 2020-09-01T15:57:07.388:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve Arguments: serve [2020-09-01 15:57:07 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-09-01 15:57:07 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-09-01 15:57:07 +0000] [1] [INFO] Using worker: gevent [2020-09-01 15:57:07 +0000] [36] [INFO] Booting worker with pid: 36 [2020-09-01 15:57:07 +0000] [37] [INFO] Booting worker with pid: 37 [2020-09-01:15:57:07:INFO] Model loaded successfully for worker : 36 [2020-09-01 15:57:07 +0000] [38] [INFO] Booting worker with pid: 38 [2020-09-01:15:57:07:INFO] Model loaded successfully for worker : 37 [2020-09-01 15:57:07 +0000] [39] [INFO] Booting worker with pid: 39 [2020-09-01:15:57:07:INFO] Model loaded successfully for worker : 38 [2020-09-01:15:57:07:INFO] Model loaded successfully for worker : 39 [2020-09-01:15:57:07:INFO] Sniff delimiter as ',' [2020-09-01:15:57:07:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:07:INFO] Sniff delimiter as ',' [2020-09-01:15:57:07:INFO] Determined delimiter of CSV input is ',' [2020-09-01 15:57:07 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-09-01 15:57:07 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-09-01 15:57:07 +0000] [1] [INFO] Using worker: gevent [2020-09-01 15:57:07 +0000] [36] [INFO] Booting worker with pid: 36 [2020-09-01 15:57:07 +0000] [37] [INFO] Booting worker with pid: 37 [2020-09-01:15:57:07:INFO] Model loaded successfully for worker : 36 [2020-09-01 15:57:07 +0000] [38] [INFO] Booting worker with pid: 38 [2020-09-01:15:57:07:INFO] Model loaded successfully for worker : 37 [2020-09-01 15:57:07 +0000] [39] [INFO] Booting worker with pid: 39 [2020-09-01:15:57:07:INFO] Model loaded successfully for worker : 38 
[2020-09-01:15:57:07:INFO] Model loaded successfully for worker : 39 [2020-09-01:15:57:07:INFO] Sniff delimiter as ',' [2020-09-01:15:57:07:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:07:INFO] Sniff delimiter as ',' [2020-09-01:15:57:07:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:08:INFO] Sniff delimiter as ',' [2020-09-01:15:57:08:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:08:INFO] Sniff delimiter as ',' [2020-09-01:15:57:08:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:08:INFO] Sniff delimiter as ',' [2020-09-01:15:57:08:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:08:INFO] Sniff delimiter as ',' [2020-09-01:15:57:08:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:10:INFO] Sniff delimiter as ',' [2020-09-01:15:57:10:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:10:INFO] Sniff delimiter as ',' [2020-09-01:15:57:10:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:10:INFO] Sniff delimiter as ',' [2020-09-01:15:57:10:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:10:INFO] Sniff delimiter as ',' [2020-09-01:15:57:10:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:10:INFO] Sniff delimiter as ',' [2020-09-01:15:57:10:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:10:INFO] Sniff delimiter as ',' [2020-09-01:15:57:10:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:10:INFO] Sniff delimiter as ',' [2020-09-01:15:57:10:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:10:INFO] Sniff delimiter as ',' [2020-09-01:15:57:10:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:12:INFO] Sniff delimiter as ',' [2020-09-01:15:57:12:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:12:INFO] Sniff delimiter as ',' [2020-09-01:15:57:12:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:12:INFO] Sniff delimiter as ',' [2020-09-01:15:57:12:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:12:INFO] Sniff delimiter as ',' [2020-09-01:15:57:12:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:12:INFO] Sniff delimiter as ',' [2020-09-01:15:57:12:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:12:INFO] Sniff delimiter as ',' [2020-09-01:15:57:12:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:13:INFO] Sniff delimiter as ',' [2020-09-01:15:57:13:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:13:INFO] Sniff delimiter as ',' [2020-09-01:15:57:13:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:15:INFO] Sniff delimiter as ',' [2020-09-01:15:57:15:INFO] Sniff delimiter as ',' [2020-09-01:15:57:15:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:15:INFO] Sniff delimiter as ',' [2020-09-01:15:57:15:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:15:INFO] Sniff delimiter as ',' [2020-09-01:15:57:15:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:15:INFO] Sniff delimiter as ',' [2020-09-01:15:57:15:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:15:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:15:INFO] Sniff delimiter as ',' [2020-09-01:15:57:15:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:15:INFO] Sniff delimiter as ',' [2020-09-01:15:57:15:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:15:INFO] Sniff delimiter as ',' 
[2020-09-01:15:57:15:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:17:INFO] Sniff delimiter as ',' [2020-09-01:15:57:17:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:17:INFO] Sniff delimiter as ',' [2020-09-01:15:57:17:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:17:INFO] Sniff delimiter as ',' [2020-09-01:15:57:17:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:17:INFO] Sniff delimiter as ',' [2020-09-01:15:57:17:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:17:INFO] Sniff delimiter as ',' [2020-09-01:15:57:17:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:17:INFO] Sniff delimiter as ',' [2020-09-01:15:57:17:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:18:INFO] Sniff delimiter as ',' [2020-09-01:15:57:18:INFO] Sniff delimiter as ',' [2020-09-01:15:57:18:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:18:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:20:INFO] Sniff delimiter as ',' [2020-09-01:15:57:20:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:20:INFO] Sniff delimiter as ',' [2020-09-01:15:57:20:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:20:INFO] Sniff delimiter as ',' [2020-09-01:15:57:20:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:20:INFO] Sniff delimiter as ',' [2020-09-01:15:57:20:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:20:INFO] Sniff delimiter as ',' [2020-09-01:15:57:20:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:20:INFO] Sniff delimiter as ',' [2020-09-01:15:57:20:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:20:INFO] Sniff delimiter as ',' [2020-09-01:15:57:20:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:20:INFO] Sniff delimiter as ',' [2020-09-01:15:57:20:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:22:INFO] Sniff delimiter as ',' [2020-09-01:15:57:22:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:22:INFO] Sniff delimiter as ',' [2020-09-01:15:57:22:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:22:INFO] Sniff delimiter as ',' [2020-09-01:15:57:22:INFO] Sniff delimiter as ',' [2020-09-01:15:57:22:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:22:INFO] Sniff delimiter as ',' [2020-09-01:15:57:22:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:22:INFO] Sniff delimiter as ',' [2020-09-01:15:57:22:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:22:INFO] Sniff delimiter as ',' [2020-09-01:15:57:22:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:22:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:22:INFO] Sniff delimiter as ',' [2020-09-01:15:57:22:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:24:INFO] Sniff delimiter as ',' [2020-09-01:15:57:24:INFO] Sniff delimiter as ',' [2020-09-01:15:57:24:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:24:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:25:INFO] Sniff delimiter as ',' [2020-09-01:15:57:25:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:25:INFO] Sniff delimiter as ',' [2020-09-01:15:57:25:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:25:INFO] Sniff delimiter as ',' [2020-09-01:15:57:25:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:25:INFO] Sniff delimiter as ',' 
[2020-09-01:15:57:25:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:25:INFO] Sniff delimiter as ',' [2020-09-01:15:57:25:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:25:INFO] Sniff delimiter as ',' [2020-09-01:15:57:25:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:27:INFO] Sniff delimiter as ',' [2020-09-01:15:57:27:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:27:INFO] Sniff delimiter as ',' [2020-09-01:15:57:27:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:27:INFO] Sniff delimiter as ',' [2020-09-01:15:57:27:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:27:INFO] Sniff delimiter as ',' [2020-09-01:15:57:27:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:27:INFO] Sniff delimiter as ',' [2020-09-01:15:57:27:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:27:INFO] Sniff delimiter as ',' [2020-09-01:15:57:27:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:27:INFO] Sniff delimiter as ',' [2020-09-01:15:57:27:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:27:INFO] Sniff delimiter as ',' [2020-09-01:15:57:27:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:29:INFO] Sniff delimiter as ',' [2020-09-01:15:57:29:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:29:INFO] Sniff delimiter as ',' [2020-09-01:15:57:29:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:29:INFO] Sniff delimiter as ',' [2020-09-01:15:57:29:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:29:INFO] Sniff delimiter as ',' [2020-09-01:15:57:29:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:30:INFO] Sniff delimiter as ',' [2020-09-01:15:57:30:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:30:INFO] Sniff delimiter as ',' [2020-09-01:15:57:30:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:30:INFO] Sniff delimiter as ',' [2020-09-01:15:57:30:INFO] Determined delimiter of CSV input is ',' [2020-09-01:15:57:30:INFO] Sniff delimiter as ',' [2020-09-01:15:57:30:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/370.3 KiB (2.2 MiB/s) with 1 file(s) remaining Completed 370.3 KiB/370.3 KiB (3.1 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-668015539882/xgboost-2020-09-01-15-52-41-053/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. 
As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.fit_transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix = prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. 
###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. Using already existing model: xgboost-2020-09-01-15-44-26-334 ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/370.7 KiB (4.2 MiB/s) with 1 file(s) remaining Completed 370.7 KiB/370.7 KiB (6.0 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-668015539882/xgboost-2020-09-01-16-30-39-381/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. Using already existing model: xgboost-2020-09-01-15-44-26-334 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. 
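# content_type tells the endpoint that the payload we send is CSV, and csv_serializer turns
# each NumPy row of features into a comma-separated string before it goes over the wire.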
xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['carri', 'matron', 'releas', '1972', 'becom', 'clear', 'seri', 'reach', 'natur', 'end', 'best', 'entri', 'like', 'cleo', 'kyber', 'scream', 'mid', 'late', '60', 'matron', 'mean', 'bad', 'seen', 'thin', 'plot', 'bunch', 'spiv', 'tri', 'break', 'hospit', 'steal', 'suppli', 'contracept', 'pill', 'plan', 'sell', 'third', 'world', 'countri', 'surround', 'gag', 'slightli', 'amus', 'though', 'unsophist', 'natur', 'think', 'problem', 'lie', 'gag', 'amus', 'unsophist', 'natur', 'start', 'show', 'age', 'need', 'anoth', 'movi', 'use', 'man', 'dress', 'woman', 'order', 'drive', 'plot', 'perhap', 'worst', 'critic', 'make', 'saw', 'carri', 'matron', 'afternoon', 'less', 'twelv', 'hour', 'ago', 'problem', 'tri', 'rememb', 'funni', 'line', 'seriou', 'problem', 'comedi', 'banana'], 1) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:484: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None' warnings.warn("The parameter 'token_pattern' will not be used" ###Markdown Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'dubiou', 'mice', 'epitom', 'growth', '21st', 'spill', 'rapidli', 'masterson'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. 
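The next cell prints that set difference. As an optional aside (assuming `new_X` still holds the tokenized reviews and the two vocabulary sets from above are in memory), a quick sketch like the one below also counts how often those new-vocabulary-only words actually occur; one or two very frequent newcomers would point at a real shift in the reviews rather than noise, which is useful context for the open question a little further down.

###Code

from collections import Counter

# Sketch: count occurrences of the words that appear in the new vocabulary but not in the
# original one. Assumes new_X (tokenized reviews) and the vocabulary sets are still defined.
new_only_words = new_vocabulary - original_vocabulary
new_word_counts = Counter(w for review in new_X for w in review if w in new_only_words)
print(new_word_counts.most_common())

###Output

_____no_output_____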
###Code print(new_vocabulary - original_vocabulary) ###Output {'detach', 'asset', 'banana', 'verg', 'hackman', 'profil', 'weaker', 'playboy'} ###Markdown These words themselves don't tell us much, however if one of these words occured with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here. Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send an bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. 
###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix = prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix = prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix = prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, role, train_instance_count=1, train_instance_type='ml.m4.xlarge', output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train':s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2020-09-01 17:35:08 Starting - Starting the training job... 2020-09-01 17:35:09 Starting - Launching requested ML instances...... 2020-09-01 17:36:34 Starting - Preparing the instances for training...... 2020-09-01 17:37:27 Downloading - Downloading input data... 2020-09-01 17:38:00 Training - Downloading the training image..Arguments: train [2020-09-01:17:38:20:INFO] Running standalone xgboost training. [2020-09-01:17:38:20:INFO] File size need to be processed in the node: 238.47mb. 
Available memory size in the node: 8462.8mb [2020-09-01:17:38:20:INFO] Determined delimiter of CSV input is ',' [17:38:20] S3DistributionType set as FullyReplicated [17:38:22] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-09-01:17:38:22:INFO] Determined delimiter of CSV input is ',' [17:38:22] S3DistributionType set as FullyReplicated [17:38:23] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, 2020-09-01 17:38:20 Training - Training image download completed. Training in progress.[17:38:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [0]#011train-error:0.313933#011validation-error:0.3102 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [17:38:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [1]#011train-error:0.2962#011validation-error:0.2953 [17:38:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [2]#011train-error:0.285267#011validation-error:0.2819 [17:38:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [3]#011train-error:0.282#011validation-error:0.2786 [17:38:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 6 pruned nodes, max_depth=5 [4]#011train-error:0.273467#011validation-error:0.2721 [17:38:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [5]#011train-error:0.2632#011validation-error:0.2627 [17:38:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [6]#011train-error:0.242467#011validation-error:0.246 [17:38:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [7]#011train-error:0.245467#011validation-error:0.2508 [17:38:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 2 pruned nodes, max_depth=5 [8]#011train-error:0.232867#011validation-error:0.236 [17:38:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [9]#011train-error:0.229333#011validation-error:0.2319 [17:38:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [10]#011train-error:0.224267#011validation-error:0.2303 [17:38:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5 [11]#011train-error:0.220667#011validation-error:0.2261 [17:38:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [12]#011train-error:0.215933#011validation-error:0.2209 [17:38:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [13]#011train-error:0.211267#011validation-error:0.2183 [17:38:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5 [14]#011train-error:0.2076#011validation-error:0.2173 [17:38:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [15]#011train-error:0.206333#011validation-error:0.2172 [17:38:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned 
nodes, max_depth=5 [16]#011train-error:0.204933#011validation-error:0.2148 [17:38:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [17]#011train-error:0.203133#011validation-error:0.2148 [17:38:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [18]#011train-error:0.1994#011validation-error:0.2101 [17:38:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [19]#011train-error:0.196467#011validation-error:0.2084 [17:38:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 14 pruned nodes, max_depth=5 [20]#011train-error:0.196133#011validation-error:0.2093 [17:38:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [21]#011train-error:0.1936#011validation-error:0.2092 [17:38:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [22]#011train-error:0.191333#011validation-error:0.2058 [17:38:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [23]#011train-error:0.189333#011validation-error:0.2033 [17:38:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [24]#011train-error:0.1878#011validation-error:0.2012 [17:38:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [25]#011train-error:0.187133#011validation-error:0.2021 [17:39:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [26]#011train-error:0.186467#011validation-error:0.2012 [17:39:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [27]#011train-error:0.186067#011validation-error:0.2004 [17:39:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [28]#011train-error:0.182933#011validation-error:0.197 [17:39:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [29]#011train-error:0.181267#011validation-error:0.1955 [17:39:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [30]#011train-error:0.1806#011validation-error:0.1962 [17:39:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [31]#011train-error:0.1786#011validation-error:0.1944 [17:39:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [32]#011train-error:0.176933#011validation-error:0.1956 [17:39:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [33]#011train-error:0.1756#011validation-error:0.1939 [17:39:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [34]#011train-error:0.174467#011validation-error:0.191 [17:39:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [35]#011train-error:0.173133#011validation-error:0.1906 [17:39:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [36]#011train-error:0.1722#011validation-error:0.1902 [17:39:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5 
[37]#011train-error:0.171067#011validation-error:0.1895 [17:39:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [38]#011train-error:0.170733#011validation-error:0.1869 [17:39:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [39]#011train-error:0.168933#011validation-error:0.1857 [17:39:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5 [40]#011train-error:0.1672#011validation-error:0.1852 [17:39:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [41]#011train-error:0.166#011validation-error:0.1858 [17:39:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [42]#011train-error:0.164867#011validation-error:0.1852 [17:39:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [43]#011train-error:0.164267#011validation-error:0.1859 [17:39:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5 [44]#011train-error:0.163333#011validation-error:0.185 [17:39:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [45]#011train-error:0.161133#011validation-error:0.1857 [17:39:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5 [46]#011train-error:0.160067#011validation-error:0.1832 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count=1, instance_type='ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. 
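# Caveat, as noted above: new_xgb was trained on this same new_data.csv, so the accuracy
# computed from this transform is optimistic; holding out part of the new reviews before
# training would give an honest estimate.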
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ..............................2020-09-01T18:15:58.353:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2020-09-01 18:15:58 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-09-01 18:15:58 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-09-01 18:15:58 +0000] [1] [INFO] Using worker: gevent [2020-09-01 18:15:58 +0000] [36] [INFO] Booting worker with pid: 36 [2020-09-01 18:15:58 +0000] [37] [INFO] Booting worker with pid: 37 [2020-09-01:18:15:58:INFO] Model loaded successfully for worker : 36 [2020-09-01 18:15:58 +0000] [38] [INFO] Booting worker with pid: 38 [2020-09-01:18:15:58:INFO] Model loaded successfully for worker : 37 [2020-09-01 18:15:58 +0000] [39] [INFO] Booting worker with pid: 39 [2020-09-01:18:15:58:INFO] Model loaded successfully for worker : 38 [2020-09-01:18:15:58:INFO] Model loaded successfully for worker : 39 Arguments: serve [2020-09-01 18:15:58 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-09-01 18:15:58 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-09-01 18:15:58 +0000] [1] [INFO] Using worker: gevent [2020-09-01 18:15:58 +0000] [36] [INFO] Booting worker with pid: 36 [2020-09-01 18:15:58 +0000] [37] [INFO] Booting worker with pid: 37 [2020-09-01:18:15:58:INFO] Model loaded successfully for worker : 36 [2020-09-01 18:15:58 +0000] [38] [INFO] Booting worker with pid: 38 [2020-09-01:18:15:58:INFO] Model loaded successfully for worker : 37 [2020-09-01 18:15:58 +0000] [39] [INFO] Booting worker with pid: 39 [2020-09-01:18:15:58:INFO] Model loaded successfully for worker : 38 [2020-09-01:18:15:58:INFO] Model loaded successfully for worker : 39 [2020-09-01:18:15:58:INFO] Sniff delimiter as ',' [2020-09-01:18:15:58:INFO] Determined delimiter of CSV input is ',' [2020-09-01:18:15:58:INFO] Sniff delimiter as ',' [2020-09-01:18:15:58:INFO] Determined delimiter of CSV input is ',' [2020-09-01:18:15:58:INFO] Sniff delimiter as ',' [2020-09-01:18:15:58:INFO] Determined delimiter of CSV input is ',' [2020-09-01:18:15:59:INFO] Sniff delimiter as ',' [2020-09-01:18:15:59:INFO] Determined delimiter of CSV input is ',' [2020-09-01:18:16:01:INFO] Sniff delimiter as ',' [2020-09-01:18:16:01:INFO] Determined delimiter of CSV input is ',' [2020-09-01:18:16:01:INFO] Sniff delimiter as ',' [2020-09-01:18:16:01:INFO] Determined delimiter of CSV input is ',' [2020-09-01:18:16:01:INFO] Sniff delimiter as ',' [2020-09-01:18:16:01:INFO] Determined delimiter of CSV input is ',' [2020-09-01:18:15:58:INFO] Sniff delimiter as ',' [2020-09-01:18:15:58:INFO] Determined delimiter of CSV input is ',' [2020-09-01:18:15:58:INFO] Sniff delimiter as ',' [2020-09-01:18:15:58:INFO] Determined delimiter of CSV input is ',' [2020-09-01:18:15:58:INFO] Sniff delimiter as ',' [2020-09-01:18:15:58:INFO] Determined delimiter of CSV input is ',' [2020-09-01:18:15:59:INFO] Sniff delimiter as ',' [2020-09-01:18:15:59:INFO] Determined delimiter of CSV input is ',' [2020-09-01:18:16:01:INFO] Sniff delimiter as ',' [2020-09-01:18:16:01:INFO] Determined delimiter of CSV input is ',' [2020-09-01:18:16:01:INFO] Sniff delimiter as ',' [2020-09-01:18:16:01:INFO] Determined delimiter of CSV input is ',' [2020-09-01:18:16:01:INFO] Sniff delimiter as ',' [2020-09-01:18:16:01:INFO] Determined delimiter of CSV input is ',' [2020-09-01:18:16:01:INFO] Sniff delimiter as ',' [2020-09-01:18:16:01:INFO] Determined 
CSV input is ',' [2020-09-01:18:16:13:INFO] Sniff delimiter as ',' [2020-09-01:18:16:13:INFO] Determined delimiter of CSV input is ',' [2020-09-01:18:16:13:INFO] Sniff delimiter as ',' [2020-09-01:18:16:13:INFO] Determined delimiter of CSV input is ',' [2020-09-01:18:16:13:INFO] Sniff delimiter as ',' [2020-09-01:18:16:13:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/366.3 KiB (4.8 MiB/s) with 1 file(s) remaining Completed 366.3 KiB/366.3 KiB (6.8 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-668015539882/xgboost-2020-09-01-18-11-08-333/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. 
###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. 
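# update_endpoint points the existing endpoint (xgb_predictor.endpoint) at the new endpoint
# configuration; SageMaker stands up the new model first and only retires the old one once
# the switch is complete, so the app calling the endpoint sees no downtime.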
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output -------------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. 
Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-03-01 22:36:32-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 21.5MB/s in 6.1s 2020-03-01 22:36:39 (13.1 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. 
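As a quick orientation (the paths come from the glob pattern used in the next cell), the extracted archive is laid out as `../data/aclImdb/<train|test>/<pos|neg>/*.txt`. An optional sanity-check sketch before running the larger cell:

###Code

import glob

# Sketch: count the review files per split and sentiment to confirm the extraction worked.
for split in ['train', 'test']:
    for sentiment in ['pos', 'neg']:
        print(split, sentiment, len(glob.glob('../data/aclImdb/{}/{}/*.txt'.format(split, sentiment))))

###Output

_____no_output_____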
###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
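As a toy illustration of that point (a throwaway sketch with made-up two-word 'reviews', separate from the cached helper below): a `CountVectorizer` fitted only on training text freezes its vocabulary there, so any word it has never seen is simply ignored when new text is transformed.

###Code

from sklearn.feature_extraction.text import CountVectorizer

# Fit on "training" text only; the vocabulary is frozen at this point.
toy_vectorizer = CountVectorizer()
toy_vectorizer.fit(["good movie", "bad movie"])

# Transforming unseen text keeps only words already in the vocabulary:
# 'plot' and 'twist' are silently dropped, and only 'good' is counted.
print(toy_vectorizer.transform(["good plot twist"]).toarray())

###Output

_____no_output_____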
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
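# At this point train_X is the bag-of-words feature array and train_y the matching labels;
# slicing both with the same split point below keeps features and labels aligned.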
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model of comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code are then used the manipulate the training artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is require when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2020-03-01 22:48:39 Starting - Starting the training job... 2020-03-01 22:48:40 Starting - Launching requested ML instances...... 2020-03-01 22:50:09 Starting - Preparing the instances for training...... 2020-03-01 22:51:01 Downloading - Downloading input data... 2020-03-01 22:51:23 Training - Downloading the training image..Arguments: train [2020-03-01:22:51:43:INFO] Running standalone xgboost training. [2020-03-01:22:51:43:INFO] File size need to be processed in the node: 238.47mb. 
Available memory size in the node: 8494.52mb [2020-03-01:22:51:43:INFO] Determined delimiter of CSV input is ',' [22:51:43] S3DistributionType set as FullyReplicated [22:51:45] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-03-01:22:51:45:INFO] Determined delimiter of CSV input is ',' [22:51:45] S3DistributionType set as FullyReplicated [22:51:46] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [22:51:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [0]#011train-error:0.284467#011validation-error:0.286 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [22:51:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [1]#011train-error:0.272067#011validation-error:0.2777 [22:51:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [2]#011train-error:0.274#011validation-error:0.2781 2020-03-01 22:51:43 Training - Training image download completed. Training in progress.[22:51:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [3]#011train-error:0.261933#011validation-error:0.2648 [22:51:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 6 pruned nodes, max_depth=5 [4]#011train-error:0.259467#011validation-error:0.2673 [22:51:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [5]#011train-error:0.2452#011validation-error:0.2518 [22:51:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [6]#011train-error:0.244067#011validation-error:0.2545 [22:51:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [7]#011train-error:0.231067#011validation-error:0.2441 [22:52:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 0 pruned nodes, max_depth=5 [8]#011train-error:0.229067#011validation-error:0.2416 [22:52:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [9]#011train-error:0.226867#011validation-error:0.2401 [22:52:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5 [10]#011train-error:0.217533#011validation-error:0.2305 [22:52:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [11]#011train-error:0.212#011validation-error:0.227 [22:52:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [12]#011train-error:0.209867#011validation-error:0.2251 [22:52:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [13]#011train-error:0.207733#011validation-error:0.2216 [22:52:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [14]#011train-error:0.2052#011validation-error:0.2184 [22:52:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [15]#011train-error:0.201867#011validation-error:0.2164 [22:52:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned 
nodes, max_depth=5 [16]#011train-error:0.196867#011validation-error:0.2138 [22:52:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [17]#011train-error:0.195067#011validation-error:0.2122 [22:52:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 0 pruned nodes, max_depth=5 [18]#011train-error:0.191533#011validation-error:0.2086 [22:52:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [19]#011train-error:0.189067#011validation-error:0.2084 [22:52:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [20]#011train-error:0.1862#011validation-error:0.2058 [22:52:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [21]#011train-error:0.1846#011validation-error:0.2047 [22:52:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [22]#011train-error:0.180467#011validation-error:0.2011 [22:52:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [23]#011train-error:0.1776#011validation-error:0.1994 [22:52:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [24]#011train-error:0.1764#011validation-error:0.197 [22:52:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [25]#011train-error:0.1738#011validation-error:0.1951 [22:52:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [26]#011train-error:0.172333#011validation-error:0.1957 [22:52:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [27]#011train-error:0.170667#011validation-error:0.1938 [22:52:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [28]#011train-error:0.167933#011validation-error:0.1938 [22:52:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [29]#011train-error:0.165733#011validation-error:0.191 [22:52:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5 [30]#011train-error:0.162933#011validation-error:0.1898 [22:52:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5 [31]#011train-error:0.161333#011validation-error:0.188 [22:52:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [32]#011train-error:0.159533#011validation-error:0.1866 [22:52:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 16 pruned nodes, max_depth=5 [33]#011train-error:0.158733#011validation-error:0.1851 [22:52:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5 [34]#011train-error:0.156667#011validation-error:0.1852 [22:52:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [35]#011train-error:0.155133#011validation-error:0.1846 [22:52:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [36]#011train-error:0.153867#011validation-error:0.1825 [22:52:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 
[37]#011train-error:0.1534#011validation-error:0.1815 [22:52:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [38]#011train-error:0.152467#011validation-error:0.1816 [22:52:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [39]#011train-error:0.150333#011validation-error:0.1807 [22:52:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 0 pruned nodes, max_depth=5 [40]#011train-error:0.149067#011validation-error:0.1799 [22:52:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 [41]#011train-error:0.147667#011validation-error:0.1799 [22:52:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [42]#011train-error:0.146733#011validation-error:0.1789 [22:52:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [43]#011train-error:0.147067#011validation-error:0.179 [22:52:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 14 pruned nodes, max_depth=5 [44]#011train-error:0.145733#011validation-error:0.1771 [22:52:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [45]#011train-error:0.145133#011validation-error:0.1755 [22:52:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5 [46]#011train-error:0.144267#011validation-error:0.1757 ###Markdown Testing the model Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real time. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done, and we would like a bit of feedback, we can run the `wait()` method.
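As an optional aside (not part of the original walkthrough), you don't strictly have to block the notebook: the status of the transform job can also be polled through the low-level Boto3 client. The sketch below assumes that the SageMaker Python SDK exposes the job name as `xgb_transformer.latest_transform_job.name`; treat that attribute, and the 30 second polling interval, as illustrative assumptions rather than the notebook's required approach. ###Code # Hedged sketch: poll the batch transform job instead of blocking on wait().
# Assumption: latest_transform_job.name holds the name of the job we just started.
import time

transform_job_name = xgb_transformer.latest_transform_job.name

while True:
    # describe_transform_job is the low-level Boto3 call used to check job status.
    desc = session.sagemaker_client.describe_transform_job(TransformJobName=transform_job_name)
    status = desc['TransformJobStatus']
    print('Transform job status:', status)
    if status in ('Completed', 'Failed', 'Stopped'):
        break
    time.sleep(30)
###Output _____no_output_____ ###Markdown In this notebook we simply block with the high-level `wait()` call below.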
###Code xgb_transformer.wait() ###Output ....................Arguments: serve [2020-03-01 23:02:45 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-03-01 23:02:45 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-03-01 23:02:45 +0000] [1] [INFO] Using worker: gevent [2020-03-01 23:02:45 +0000] [38] [INFO] Booting worker with pid: 38 [2020-03-01 23:02:45 +0000] [39] [INFO] Booting worker with pid: 39 [2020-03-01 23:02:45 +0000] [40] [INFO] Booting worker with pid: 40 [2020-03-01:23:02:45:INFO] Model loaded successfully for worker : 38 [2020-03-01:23:02:45:INFO] Model loaded successfully for worker : 39 [2020-03-01 23:02:45 +0000] [41] [INFO] Booting worker with pid: 41 [2020-03-01:23:02:45:INFO] Model loaded successfully for worker : 40 [2020-03-01:23:02:46:INFO] Model loaded successfully for worker : 41 [2020-03-01:23:03:20:INFO] Sniff delimiter as ',' [2020-03-01:23:03:20:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:20:INFO] Sniff delimiter as ',' [2020-03-01:23:03:20:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:20:INFO] Sniff delimiter as ',' [2020-03-01:23:03:20:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:21:INFO] Sniff delimiter as ',' [2020-03-01:23:03:21:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:20:INFO] Sniff delimiter as ',' [2020-03-01:23:03:20:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:20:INFO] Sniff delimiter as ',' [2020-03-01:23:03:20:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:20:INFO] Sniff delimiter as ',' [2020-03-01:23:03:20:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:21:INFO] Sniff delimiter as ',' [2020-03-01:23:03:21:INFO] Determined delimiter of CSV input is ',' 2020-03-01T23:03:17.590:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-03-01:23:03:23:INFO] Sniff delimiter as ',' [2020-03-01:23:03:23:INFO] Sniff delimiter as ',' [2020-03-01:23:03:23:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:23:INFO] Sniff delimiter as ',' [2020-03-01:23:03:23:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:23:INFO] Sniff delimiter as ',' [2020-03-01:23:03:23:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:23:INFO] Sniff delimiter as ',' [2020-03-01:23:03:23:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:23:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:23:INFO] Sniff delimiter as ',' [2020-03-01:23:03:23:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:23:INFO] Sniff delimiter as ',' [2020-03-01:23:03:23:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:23:INFO] Sniff delimiter as ',' [2020-03-01:23:03:23:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:25:INFO] Sniff delimiter as ',' [2020-03-01:23:03:25:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:25:INFO] Sniff delimiter as ',' [2020-03-01:23:03:25:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:25:INFO] Sniff delimiter as ',' [2020-03-01:23:03:25:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:26:INFO] Sniff delimiter as ',' [2020-03-01:23:03:26:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:25:INFO] Sniff delimiter as ',' [2020-03-01:23:03:25:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:25:INFO] Sniff delimiter as ',' [2020-03-01:23:03:25:INFO] Determined delimiter of CSV input is ',' 
[2020-03-01:23:03:25:INFO] Sniff delimiter as ',' [2020-03-01:23:03:25:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:26:INFO] Sniff delimiter as ',' [2020-03-01:23:03:26:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:28:INFO] Sniff delimiter as ',' [2020-03-01:23:03:28:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:28:INFO] Sniff delimiter as ',' [2020-03-01:23:03:28:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:28:INFO] Sniff delimiter as ',' [2020-03-01:23:03:28:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:28:INFO] Sniff delimiter as ',' [2020-03-01:23:03:28:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:28:INFO] Sniff delimiter as ',' [2020-03-01:23:03:28:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:28:INFO] Sniff delimiter as ',' [2020-03-01:23:03:28:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:28:INFO] Sniff delimiter as ',' [2020-03-01:23:03:28:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:28:INFO] Sniff delimiter as ',' [2020-03-01:23:03:28:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:33:INFO] Sniff delimiter as ',' [2020-03-01:23:03:33:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:33:INFO] Sniff delimiter as ',' [2020-03-01:23:03:33:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:33:INFO] Sniff delimiter as ',' [2020-03-01:23:03:33:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:33:INFO] Sniff delimiter as ',' [2020-03-01:23:03:33:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:33:INFO] Sniff delimiter as ',' [2020-03-01:23:03:33:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:33:INFO] Sniff delimiter as ',' [2020-03-01:23:03:33:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:33:INFO] Sniff delimiter as ',' [2020-03-01:23:03:33:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:33:INFO] Sniff delimiter as ',' [2020-03-01:23:03:33:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:35:INFO] Sniff delimiter as ',' [2020-03-01:23:03:35:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:35:INFO] Sniff delimiter as ',' [2020-03-01:23:03:35:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:35:INFO] Sniff delimiter as ',' [2020-03-01:23:03:35:INFO] Sniff delimiter as ',' [2020-03-01:23:03:35:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:35:INFO] Sniff delimiter as ',' [2020-03-01:23:03:35:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:35:INFO] Sniff delimiter as ',' [2020-03-01:23:03:35:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:35:INFO] Sniff delimiter as ',' [2020-03-01:23:03:35:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:35:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:35:INFO] Sniff delimiter as ',' [2020-03-01:23:03:35:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:37:INFO] Sniff delimiter as ',' [2020-03-01:23:03:37:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:38:INFO] Sniff delimiter as ',' [2020-03-01:23:03:38:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:37:INFO] Sniff delimiter as ',' [2020-03-01:23:03:37:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:38:INFO] Sniff delimiter as ',' [2020-03-01:23:03:38:INFO] Determined delimiter of CSV input is ',' 
[2020-03-01:23:03:38:INFO] Sniff delimiter as ',' [2020-03-01:23:03:38:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:38:INFO] Sniff delimiter as ',' [2020-03-01:23:03:38:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:38:INFO] Sniff delimiter as ',' [2020-03-01:23:03:38:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:03:38:INFO] Sniff delimiter as ',' [2020-03-01:23:03:38:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/371.0 KiB (3.1 MiB/s) with 1 file(s) remaining Completed 371.0 KiB/371.0 KiB (4.4 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-1-595380434278/xgboost-2020-03-01-22-59-30-181/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() # new_X[100] ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
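Before filling in the TODO, a small aside may help clarify why we reuse the original vocabulary: passing a fixed `vocabulary` to `CountVectorizer` pins both the number of features and their order, and silently drops any word that is not in that vocabulary. The toy example below is only an illustration with a made-up three-word vocabulary; none of these objects are used anywhere else in the notebook. ###Code from sklearn.feature_extraction.text import CountVectorizer

# Toy illustration (hypothetical three-word vocabulary): the encoding always has
# three columns, in the order given by the vocabulary, and the out-of-vocabulary
# word 'zebra' is simply ignored.
toy_vocab = {'good': 0, 'bad': 1, 'movie': 2}
toy_vectorizer = CountVectorizer(vocabulary=toy_vocab,
                                 preprocessor=lambda x: x,   # documents are already word lists
                                 tokenizer=lambda x: x)

print(toy_vectorizer.transform([['good', 'movie', 'zebra']]).toarray())  # -> [[1 0 1]]
###Output _____no_output_____ ###Markdown With that in mind, we construct the real vectorizer from the vocabulary built on the original training data.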
###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # the reviews are already pre-processed word lists # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary, which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model, we can save the encoded data locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working. First, we save the data locally. **TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3. **TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews. **TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4, to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished.
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ....................Arguments: serve [2020-03-01 23:14:38 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-03-01 23:14:38 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-03-01 23:14:38 +0000] [1] [INFO] Using worker: gevent [2020-03-01 23:14:38 +0000] [38] [INFO] Booting worker with pid: 38 [2020-03-01 23:14:38 +0000] [39] [INFO] Booting worker with pid: 39 [2020-03-01 23:14:38 +0000] [40] [INFO] Booting worker with pid: 40 [2020-03-01:23:14:38:INFO] Model loaded successfully for worker : 38 [2020-03-01:23:14:38:INFO] Model loaded successfully for worker : 39 [2020-03-01:23:14:38:INFO] Model loaded successfully for worker : 40 [2020-03-01 23:14:38 +0000] [41] [INFO] Booting worker with pid: 41 [2020-03-01:23:14:38:INFO] Model loaded successfully for worker : 41 2020-03-01T23:15:07.378:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-03-01:23:15:09:INFO] Sniff delimiter as ',' [2020-03-01:23:15:09:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:09:INFO] Sniff delimiter as ',' [2020-03-01:23:15:09:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:09:INFO] Sniff delimiter as ',' [2020-03-01:23:15:09:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:09:INFO] Sniff delimiter as ',' [2020-03-01:23:15:09:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:09:INFO] Sniff delimiter as ',' [2020-03-01:23:15:09:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:09:INFO] Sniff delimiter as ',' [2020-03-01:23:15:09:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:10:INFO] Sniff delimiter as ',' [2020-03-01:23:15:10:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:10:INFO] Sniff delimiter as ',' [2020-03-01:23:15:10:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:12:INFO] Sniff delimiter as ',' [2020-03-01:23:15:12:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:12:INFO] Sniff delimiter as ',' [2020-03-01:23:15:12:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:12:INFO] Sniff delimiter as ',' [2020-03-01:23:15:12:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:12:INFO] Sniff delimiter as ',' [2020-03-01:23:15:12:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:12:INFO] Sniff delimiter as ',' [2020-03-01:23:15:12:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:12:INFO] Sniff delimiter as ',' [2020-03-01:23:15:12:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:13:INFO] Sniff delimiter as ',' [2020-03-01:23:15:13:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:13:INFO] Sniff delimiter as ',' [2020-03-01:23:15:13:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:14:INFO] Sniff delimiter as ',' [2020-03-01:23:15:14:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:14:INFO] Sniff delimiter as ',' [2020-03-01:23:15:14:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:14:INFO] Sniff delimiter as ',' [2020-03-01:23:15:14:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:14:INFO] Sniff delimiter as ',' [2020-03-01:23:15:14:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:15:INFO] Sniff delimiter as ',' [2020-03-01:23:15:15:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:15:INFO] Sniff delimiter as ',' 
[2020-03-01:23:15:15:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:15:INFO] Sniff delimiter as ',' [2020-03-01:23:15:15:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:15:INFO] Sniff delimiter as ',' [2020-03-01:23:15:15:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:17:INFO] Sniff delimiter as ',' [2020-03-01:23:15:17:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:17:INFO] Sniff delimiter as ',' [2020-03-01:23:15:17:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:17:INFO] Sniff delimiter as ',' [2020-03-01:23:15:17:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:17:INFO] Sniff delimiter as ',' [2020-03-01:23:15:17:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:17:INFO] Sniff delimiter as ',' [2020-03-01:23:15:17:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:17:INFO] Sniff delimiter as ',' [2020-03-01:23:15:17:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:17:INFO] Sniff delimiter as ',' [2020-03-01:23:15:17:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:17:INFO] Sniff delimiter as ',' [2020-03-01:23:15:17:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:22:INFO] Sniff delimiter as ',' [2020-03-01:23:15:22:INFO] Sniff delimiter as ',' [2020-03-01:23:15:22:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:22:INFO] Sniff delimiter as ',' [2020-03-01:23:15:22:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:22:INFO] Sniff delimiter as ',' [2020-03-01:23:15:22:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:22:INFO] Sniff delimiter as ',' [2020-03-01:23:15:22:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:22:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:22:INFO] Sniff delimiter as ',' [2020-03-01:23:15:22:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:22:INFO] Sniff delimiter as ',' [2020-03-01:23:15:22:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:22:INFO] Sniff delimiter as ',' [2020-03-01:23:15:22:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:24:INFO] Sniff delimiter as ',' [2020-03-01:23:15:24:INFO] Sniff delimiter as ',' [2020-03-01:23:15:24:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:24:INFO] Sniff delimiter as ',' [2020-03-01:23:15:24:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:24:INFO] Sniff delimiter as ',' [2020-03-01:23:15:24:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:24:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:24:INFO] Sniff delimiter as ',' [2020-03-01:23:15:24:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:24:INFO] Sniff delimiter as ',' [2020-03-01:23:15:24:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:25:INFO] Sniff delimiter as ',' [2020-03-01:23:15:25:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:25:INFO] Sniff delimiter as ',' [2020-03-01:23:15:25:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:27:INFO] Sniff delimiter as ',' [2020-03-01:23:15:27:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:27:INFO] Sniff delimiter as ',' [2020-03-01:23:15:27:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:27:INFO] Sniff delimiter as ',' [2020-03-01:23:15:27:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:27:INFO] Sniff delimiter as ',' 
[2020-03-01:23:15:27:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:27:INFO] Sniff delimiter as ',' [2020-03-01:23:15:27:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:27:INFO] Sniff delimiter as ',' [2020-03-01:23:15:27:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:27:INFO] Sniff delimiter as ',' [2020-03-01:23:15:27:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:15:27:INFO] Sniff delimiter as ',' [2020-03-01:23:15:27:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/371.3 KiB (2.9 MiB/s) with 1 file(s) remaining Completed 371.3 KiB/371.3 KiB (4.0 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-1-595380434278/xgboost-2020-03-01-23-11-27-698/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed, since our model is no longer (as) effective at determining the sentiment of a user-provided review. In a real-life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one, and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit, so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set. Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set. To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real-life scenario where we have a model that has been deployed and is being used in production. **TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output WARNING:sagemaker:Using already existing model: xgboost-2020-03-01-22-48-38-871 ###Markdown Diagnose the problem Now that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['take', 'serbian', 'least', 'balkan', 'familiar', 'understand', 'enjoy', 'mani', 'situat', 'charact', 'joke', 'zavet', 'well', 'mani', 'film', 'kusturica', 'see', 'exampl', 'open', 'scene', 'remot', 'villag', 'serbian', 'mountain', 'low', 'tech', 'devic', 'defend', 'integr', 'way', 'life', 'inhabit', 'nostalgia', 'good', 'day', 'communist', 'rule', 'awaken', 'young', 'brat', 'watch', 'nude', 'teacher', 'sound', 'soviet', 'hymn', 'kusturica', 'tri', 'depart', 'tragic', 'histori', 'serbia', 'describ', 'previou', 'movi', 'creat', 'whole', 'world', 'process', 'polit', 'film', 'explicit', 'polit', 'film', 'case', 'fun', 'film', 'kusturica', 'creat', 'set', 'charact', 'care', 'music', 'play', 'activ', 'role', 'film', 'movi', 'style', 'direct', 'grotesqu', 'feel', 'charact', 'know', 'want', 'us', 'feel', 'space', 'develop', 'meld', 'present', 'histori', 'magic', 'colour', 'seem', 'douanier', 'rousseau', 'modern', 'cinema', 'perfect', 'film', 'either', 'princip', 'flaw', 'length', 'edit', 'shorten', 'would', 'use', 'point', 'time', 'director', 'seem', 'run', 'idea', 'repetit', 'show', 'yet', 'one', 'catch', 'amus', 'move', 'film', 'seen', 'late', 'banana'], 0) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. 
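As a side note (not one of the original TODOs), a generator like this composes nicely with `itertools.islice` if you want to pull several misclassified examples at once; the sketch below grabs the first three. It sends a few individual records to the deployed endpoint, exactly as `get_sample` already does. ###Code from itertools import islice

# Optional: collect the first three misclassified (review, true label) pairs.
# Each review is a list of processed words, so we only peek at the first few words.
for words, label in islice(get_sample(new_X, new_XV, new_Y), 3):
    print('true label:', label, '| first words:', words[:10])
###Output _____no_output_____ ###Markdown For now, though, we simply look at one example at a time.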
###Code print(original_vocabulary - new_vocabulary) ###Output {'reincarn', 'spill', 'weari', 'ghetto', 'playboy', '21st', 'victorian'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'sophi', 'omin', 'masterson', 'banana', 'dubiou', 'optimist', 'orchestr'} ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency. **Question:** What exactly is going on here? Not only which (if any) words appear with a larger than expected frequency, but also, what does this mean? What has changed about the world that our original model no longer takes into account? **NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. The function that constructs the new data inserts the word `banana` randomly into the documents. However, this word was not part of the original vocabulary and, as it was inserted randomly, chances are that it occurred many, many times. In this sense, the model accuracy is affected by the appearance of a new and frequent word. (TODO) Build a new model Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed. To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model. **NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results. In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary that we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable.
Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2020-03-01 23:40:03 Starting - Starting the training job... 
2020-03-01 23:40:05 Starting - Launching requested ML instances...... 2020-03-01 23:41:08 Starting - Preparing the instances for training... 2020-03-01 23:41:59 Downloading - Downloading input data...... 2020-03-01 23:42:54 Training - Training image download completed. Training in progress.Arguments: train [2020-03-01:23:42:56:INFO] Running standalone xgboost training. [2020-03-01:23:42:56:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8508.11mb [2020-03-01:23:42:56:INFO] Determined delimiter of CSV input is ',' [23:42:56] S3DistributionType set as FullyReplicated [23:42:58] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-03-01:23:42:58:INFO] Determined delimiter of CSV input is ',' [23:42:58] S3DistributionType set as FullyReplicated [23:42:59] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [23:43:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 0 pruned nodes, max_depth=5 [0]#011train-error:0.3122#011validation-error:0.3126 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [23:43:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [1]#011train-error:0.296667#011validation-error:0.2997 [23:43:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [2]#011train-error:0.284667#011validation-error:0.2864 [23:43:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 10 pruned nodes, max_depth=5 [3]#011train-error:0.276067#011validation-error:0.2813 [23:43:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 4 pruned nodes, max_depth=5 [4]#011train-error:0.2724#011validation-error:0.2784 [23:43:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5 [5]#011train-error:0.255467#011validation-error:0.2651 [23:43:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [6]#011train-error:0.2518#011validation-error:0.2622 [23:43:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 14 pruned nodes, max_depth=5 [7]#011train-error:0.243933#011validation-error:0.2564 [23:43:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [8]#011train-error:0.236467#011validation-error:0.2492 [23:43:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [9]#011train-error:0.229067#011validation-error:0.2442 [23:43:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [10]#011train-error:0.222733#011validation-error:0.237 [23:43:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [11]#011train-error:0.219467#011validation-error:0.2361 [23:43:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [12]#011train-error:0.2166#011validation-error:0.233 [23:43:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [13]#011train-error:0.2158#011validation-error:0.2315 [23:43:21] src/tree/updater_prune.cc:74: tree pruning end, 1 
roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [14]#011train-error:0.212267#011validation-error:0.2272 [23:43:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [15]#011train-error:0.2046#011validation-error:0.2231 [23:43:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [16]#011train-error:0.202667#011validation-error:0.2216 [23:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 10 pruned nodes, max_depth=5 [17]#011train-error:0.199867#011validation-error:0.2207 [23:43:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [18]#011train-error:0.199333#011validation-error:0.2181 [23:43:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [19]#011train-error:0.197#011validation-error:0.2154 [23:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [20]#011train-error:0.195133#011validation-error:0.2149 [23:43:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 0 pruned nodes, max_depth=5 [21]#011train-error:0.193667#011validation-error:0.2147 [23:43:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [22]#011train-error:0.1932#011validation-error:0.2129 [23:43:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [23]#011train-error:0.189267#011validation-error:0.2104 [23:43:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 2 pruned nodes, max_depth=5 [24]#011train-error:0.1858#011validation-error:0.2072 [23:43:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [25]#011train-error:0.183333#011validation-error:0.2071 [23:43:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [26]#011train-error:0.182667#011validation-error:0.2071 [23:43:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 18 pruned nodes, max_depth=5 [27]#011train-error:0.180733#011validation-error:0.2047 [23:43:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [28]#011train-error:0.177267#011validation-error:0.2037 [23:43:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [29]#011train-error:0.176267#011validation-error:0.202 [23:43:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [30]#011train-error:0.173333#011validation-error:0.2003 [23:43:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5 [31]#011train-error:0.1732#011validation-error:0.1994 [23:43:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [32]#011train-error:0.171333#011validation-error:0.1986 [23:43:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [33]#011train-error:0.1694#011validation-error:0.197 [23:43:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [34]#011train-error:0.1684#011validation-error:0.1951 [23:43:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned 
nodes, max_depth=5 [35]#011train-error:0.1666#011validation-error:0.1941 [23:43:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [36]#011train-error:0.164733#011validation-error:0.1916 [23:43:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [37]#011train-error:0.163333#011validation-error:0.1917 [23:43:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [38]#011train-error:0.1632#011validation-error:0.1905 [23:43:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [39]#011train-error:0.160667#011validation-error:0.1906 [23:43:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [40]#011train-error:0.1604#011validation-error:0.1917 [23:43:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [41]#011train-error:0.158267#011validation-error:0.1903 [23:43:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [42]#011train-error:0.156867#011validation-error:0.19 [23:43:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [43]#011train-error:0.155733#011validation-error:0.1892 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. 
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output .....................Arguments: serve [2020-03-01 23:49:04 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-03-01 23:49:04 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-03-01 23:49:04 +0000] [1] [INFO] Using worker: gevent [2020-03-01 23:49:04 +0000] [38] [INFO] Booting worker with pid: 38 [2020-03-01 23:49:05 +0000] [39] [INFO] Booting worker with pid: 39 [2020-03-01:23:49:05:INFO] Model loaded successfully for worker : 38 [2020-03-01 23:49:05 +0000] [40] [INFO] Booting worker with pid: 40 [2020-03-01 23:49:05 +0000] [41] [INFO] Booting worker with pid: 41 [2020-03-01:23:49:05:INFO] Model loaded successfully for worker : 39 [2020-03-01:23:49:05:INFO] Model loaded successfully for worker : 40 [2020-03-01:23:49:05:INFO] Model loaded successfully for worker : 41 [2020-03-01:23:49:46:INFO] Sniff delimiter as ',' [2020-03-01:23:49:46:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:47:INFO] Sniff delimiter as ',' [2020-03-01:23:49:47:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:46:INFO] Sniff delimiter as ',' [2020-03-01:23:49:46:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:47:INFO] Sniff delimiter as ',' [2020-03-01:23:49:47:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:47:INFO] Sniff delimiter as ',' [2020-03-01:23:49:47:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:47:INFO] Sniff delimiter as ',' [2020-03-01:23:49:47:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:47:INFO] Sniff delimiter as ',' [2020-03-01:23:49:47:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:47:INFO] Sniff delimiter as ',' [2020-03-01:23:49:47:INFO] Determined delimiter of CSV input is ',' 2020-03-01T23:49:44.601:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-03-01:23:49:49:INFO] Sniff delimiter as ',' [2020-03-01:23:49:49:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:49:INFO] Sniff delimiter as ',' [2020-03-01:23:49:49:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:49:INFO] Sniff delimiter as ',' [2020-03-01:23:49:49:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:49:INFO] Sniff delimiter as ',' [2020-03-01:23:49:49:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:49:INFO] Sniff delimiter as ',' [2020-03-01:23:49:49:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:49:INFO] Sniff delimiter as ',' [2020-03-01:23:49:49:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:50:INFO] Sniff delimiter as ',' [2020-03-01:23:49:50:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:50:INFO] Sniff delimiter as ',' [2020-03-01:23:49:50:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:52:INFO] Sniff delimiter as ',' [2020-03-01:23:49:52:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:52:INFO] Sniff delimiter as ',' [2020-03-01:23:49:52:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:52:INFO] Sniff delimiter as ',' [2020-03-01:23:49:52:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:52:INFO] Sniff delimiter as ',' [2020-03-01:23:49:52:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:52:INFO] Sniff delimiter as ',' [2020-03-01:23:49:52:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:52:INFO] Sniff delimiter as 
',' [2020-03-01:23:49:52:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:52:INFO] Sniff delimiter as ',' [2020-03-01:23:49:52:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:52:INFO] Sniff delimiter as ',' [2020-03-01:23:49:52:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:54:INFO] Sniff delimiter as ',' [2020-03-01:23:49:54:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:54:INFO] Sniff delimiter as ',' [2020-03-01:23:49:54:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:54:INFO] Sniff delimiter as ',' [2020-03-01:23:49:54:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:54:INFO] Sniff delimiter as ',' [2020-03-01:23:49:54:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:55:INFO] Sniff delimiter as ',' [2020-03-01:23:49:55:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:54:INFO] Sniff delimiter as ',' [2020-03-01:23:49:54:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:54:INFO] Sniff delimiter as ',' [2020-03-01:23:49:54:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:55:INFO] Sniff delimiter as ',' [2020-03-01:23:49:55:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:57:INFO] Sniff delimiter as ',' [2020-03-01:23:49:57:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:57:INFO] Sniff delimiter as ',' [2020-03-01:23:49:57:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:59:INFO] Sniff delimiter as ',' [2020-03-01:23:49:59:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:59:INFO] Sniff delimiter as ',' [2020-03-01:23:49:59:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:59:INFO] Sniff delimiter as ',' [2020-03-01:23:49:59:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:59:INFO] Sniff delimiter as ',' [2020-03-01:23:49:59:INFO] Sniff delimiter as ',' [2020-03-01:23:49:59:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:59:INFO] Sniff delimiter as ',' [2020-03-01:23:49:59:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:59:INFO] Sniff delimiter as ',' [2020-03-01:23:49:59:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:59:INFO] Sniff delimiter as ',' [2020-03-01:23:49:59:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:49:59:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:50:01:INFO] Sniff delimiter as ',' [2020-03-01:23:50:01:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:50:01:INFO] Sniff delimiter as ',' [2020-03-01:23:50:01:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:50:01:INFO] Sniff delimiter as ',' [2020-03-01:23:50:01:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:50:01:INFO] Sniff delimiter as ',' [2020-03-01:23:50:01:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:50:02:INFO] Sniff delimiter as ',' [2020-03-01:23:50:02:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:50:02:INFO] Sniff delimiter as ',' [2020-03-01:23:50:02:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:50:02:INFO] Sniff delimiter as ',' [2020-03-01:23:50:02:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:50:02:INFO] Sniff delimiter as ',' [2020-03-01:23:50:02:INFO] Determined delimiter of CSV input is ',' [2020-03-01:23:50:04:INFO] Sniff delimiter as ',' [2020-03-01:23:50:04:INFO] Sniff delimiter as ',' [2020-03-01:23:50:04:INFO] Determined delimiter of CSV input is ',' 
###Markdown
Copy the results to our local instance.
###Code
!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir
###Output
Completed 256.0 KiB/366.4 KiB (3.0 MiB/s) with 1 file(s) remaining
Completed 366.4 KiB/366.4 KiB (4.1 MiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-1-595380434278/xgboost-2020-03-01-23-45-47-092/new_data.csv.out to ../data/sentiment_update/new_data.csv.out
###Markdown
And see how well the model did.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]

accuracy_score(new_Y, predictions)
###Output
_____no_output_____
###Markdown
As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.

However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.

To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one.
###Code
cache_data = None
with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f:
    cache_data = pickle.load(f)
    print("Read preprocessed data from cache file:", "preprocessed_data.pkl")

test_X = cache_data['words_test']
test_Y = cache_data['labels_test']

# Here we set cache_data to None so that it doesn't occupy memory
cache_data = None
###Output
Read preprocessed data from cache file: preprocessed_data.pkl
###Markdown
Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.

**TODO:** Transform the original test data using the new vocabulary.
###Code
# TODO: Use the new_vectorizer object that you created earlier to transform the test_X data.
test_X = new_vectorizer.transform(test_X).toarray()
###Output
_____no_output_____
###Markdown
Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it.
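Before doing so, it can be worth a quick sanity check of why the re-encoding step above was necessary at all: even a word that appears in both vocabularies will generally be assigned a different column index by the new `CountVectorizer`. The cell below is only an illustrative sketch; it assumes that the original word-to-index dictionary `vocabulary` (returned by `extract_BoW_features` earlier) and the fitted `new_vectorizer` are still in scope.
###Code
# Illustrative check (not required for the rest of the notebook):
# `vocabulary` is assumed to be the original word -> column-index dict and
# `new_vectorizer.vocabulary_` the one fitted on the new data.
original_vocab = set(vocabulary.keys())
new_vocab = set(new_vectorizer.vocabulary_.keys())

shared_words = original_vocab & new_vocab
moved_words = [w for w in shared_words if vocabulary[w] != new_vectorizer.vocabulary_[w]]

print("words only in the original vocabulary:", len(original_vocab - new_vocab))
print("words only in the new vocabulary:     ", len(new_vocab - original_vocab))
print("shared words mapped to a new column:  ", len(moved_words))
###Output
_____no_output_____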
###Code
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)

test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)

new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()

!aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir

predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]

accuracy_score(test_Y, predictions)
###Output
_____no_output_____
###Markdown
It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model.

Step 6: (TODO) Updating the Model

So we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.

Of course, to do this we need to create an endpoint configuration for our newly created model.

First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it.
###Code
new_xgb_transformer.model_name
###Output
_____no_output_____
###Markdown
Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.

**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook.
###Code
from time import gmtime, strftime

# TODO: Give our endpoint configuration a name. Remember, it needs to be unique.
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())

# TODO: Using the SageMaker Client, construct the endpoint configuration.
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
                            EndpointConfigName = new_xgb_endpoint_config_name,
                            ProductionVariants = [{
                                "InstanceType": "ml.m4.xlarge",
                                "InitialVariantWeight": 1,
                                "InitialInstanceCount": 1,
                                "ModelName": new_xgb_transformer.model_name,
                                "VariantName": "XGB-Model"
                            }])
###Output
_____no_output_____
###Markdown
Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.

Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.

**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier.
###Code
# TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name.
session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name)
###Output
_____no_output_____
###Markdown
And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method.
###Code
session.wait_for_endpoint(xgb_predictor.endpoint)
###Output
_____no_output_____
###Markdown
Step 7: Delete the Endpoint

Of course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it.
###Code
xgb_predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Some Additional Questions

This notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.

For example,
- What other ways could the underlying distribution change?
- Is it a good idea to re-train the model using only the new data?
- What would change if the quantity of new data wasn't large? Say you only received 500 samples?

Optional: Clean up

The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*

# And then we delete the directory itself
!rmdir $data_dir

# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____
###Markdown
Sentiment Analysis

Updating a Model in SageMaker

_Deep Learning Nanodegree Program | Deployment_

---

In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.

This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5.

Instructions

Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment.
Please be sure to read the instructions carefully!

In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.

> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can typically be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.

Step 1: Downloading the data

The dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.

> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.

We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
mkdir: cannot create directory ‘../data’: File exists
--2020-05-21 14:56:19--  http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’

../data/aclImdb_v1. 100%[===================>]  80.23M  51.2MB/s    in 1.6s

2020-05-21 14:56:21 (51.2 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
###Markdown
Step 2: Preparing the data

The data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
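To make the reading code below easier to follow, it helps to keep the layout of the unpacked archive in mind: one `.txt` file per review, grouped by split and by sentiment. The small check below is only an illustrative sketch and assumes the download cell above has already been run with the default paths.
###Code
# Each review lives in its own .txt file, organised as
#   ../data/aclImdb/{train,test}/{pos,neg}/*.txt
# (the archive also contains an unlabelled ../data/aclImdb/train/unsup/ folder,
#  which is not used in this notebook)
import glob
print(len(glob.glob('../data/aclImdb/train/pos/*.txt')))  # 12,500 positive training reviews
print(len(glob.glob('../data/aclImdb/test/neg/*.txt')))   # 12,500 negative test reviews
###Output
_____no_output_____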
###Code
import os
import glob

def read_imdb_data(data_dir='../data/aclImdb'):
    data = {}
    labels = {}

    for data_type in ['train', 'test']:
        data[data_type] = {}
        labels[data_type] = {}

        for sentiment in ['pos', 'neg']:
            data[data_type][sentiment] = []
            labels[data_type][sentiment] = []

            path = os.path.join(data_dir, data_type, sentiment, '*.txt')
            files = glob.glob(path)

            for f in files:
                with open(f) as review:
                    data[data_type][sentiment].append(review.read())
                    # Here we represent a positive review by '1' and a negative review by '0'
                    labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)

            assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
                    "{}/{} data size does not match labels size".format(data_type, sentiment)

    return data, labels

data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
            len(data['train']['pos']), len(data['train']['neg']),
            len(data['test']['pos']), len(data['test']['neg'])))

from sklearn.utils import shuffle

def prepare_imdb_data(data, labels):
    """Prepare training and test sets from IMDb movie reviews."""

    # Combine positive and negative reviews and labels
    data_train = data['train']['pos'] + data['train']['neg']
    data_test = data['test']['pos'] + data['test']['neg']
    labels_train = labels['train']['pos'] + labels['train']['neg']
    labels_test = labels['test']['pos'] + labels['test']['neg']

    # Shuffle reviews and corresponding labels within training and test sets
    data_train, labels_train = shuffle(data_train, labels_train)
    data_test, labels_test = shuffle(data_test, labels_test)

    # Return a unified training data, test data, training labels, test labels
    return data_train, data_test, labels_train, labels_test

train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))

train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the data

Now that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
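To make the individual steps concrete before they are wrapped in a function, here is a small illustrative sketch on a made-up review; the toy review text and intermediate variable names are purely for illustration, and the real implementation is the `review_to_words` function in the next cell.
###Code
# Illustrative walk-through of the processing steps on a toy review
import re
import nltk
nltk.download("stopwords", quiet=True)
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from bs4 import BeautifulSoup

toy_review = "This movie was GREAT!<br /><br />I could not stop watching it."

text = BeautifulSoup(toy_review, "html.parser").get_text()                # strip the <br /> tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())                         # lower-case, drop punctuation
words = [w for w in text.split() if w not in stopwords.words("english")]  # remove stopwords
print([PorterStemmer().stem(w) for w in words])                           # stem what is left
###Output
_____no_output_____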
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()

import re
from bs4 import BeautifulSoup

def review_to_words(review):
    text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
    words = text.split() # Split string into words
    words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [PorterStemmer().stem(w) for w in words] # stem

    return words

review_to_words(train_X[100])

import pickle

cache_dir = os.path.join("../cache", "sentiment_analysis")  # where to store cache files
os.makedirs(cache_dir, exist_ok=True)  # ensure cache directory exists

def preprocess_data(data_train, data_test, labels_train, labels_test,
                    cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
    """Convert each review to words; read from cache if available."""

    # If cache_file is not None, try to read from it first
    cache_data = None
    if cache_file is not None:
        try:
            with open(os.path.join(cache_dir, cache_file), "rb") as f:
                cache_data = pickle.load(f)
            print("Read preprocessed data from cache file:", cache_file)
        except:
            pass  # unable to read from cache, but that's okay

    # If cache is missing, then do the heavy lifting
    if cache_data is None:
        # Preprocess training and test data to obtain words for each review
        #words_train = list(map(review_to_words, data_train))
        #words_test = list(map(review_to_words, data_test))
        words_train = [review_to_words(review) for review in data_train]
        words_test = [review_to_words(review) for review in data_test]

        # Write to cache file for future runs
        if cache_file is not None:
            cache_data = dict(words_train=words_train, words_test=words_test,
                              labels_train=labels_train, labels_test=labels_test)
            with open(os.path.join(cache_dir, cache_file), "wb") as f:
                pickle.dump(cache_data, f)
            print("Wrote preprocessed data to cache file:", cache_file)
    else:
        # Unpack data loaded from cache file
        words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
                cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])

    return words_train, words_test, labels_train, labels_test

import time
from math import floor  # floor is used below and was not imported anywhere above

start_time = time.time()

# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)

end_time = time.time()

time_passed_s = end_time - start_time
time_passed_m = floor(time_passed_s/60)
time_passed_h = floor(time_passed_m/60)
time_passed_m = time_passed_m - time_passed_h*60
time_passed_s = time_passed_s - time_passed_m*60 - time_passed_h*60*60
print("Preprocessing Took ", time_passed_h, "h : ", time_passed_m, "m : ", time_passed_s, "s")
###Output
_____no_output_____
###Markdown
Extract Bag-of-Words features

For the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation.
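The 'training set only' point is easy to see on a toy example. The sketch below is purely illustrative (the toy token lists are made up) and uses the same `CountVectorizer` trick as the real implementation in the next cell: because the reviews are already tokenised, dummy `preprocessor`/`tokenizer` functions are passed in.
###Code
# Illustrative sketch: the vocabulary (and hence the feature columns) comes from the
# training documents only; words that never occur in training are silently dropped.
from sklearn.feature_extraction.text import CountVectorizer

toy_train = [['great', 'movi', 'great'], ['bad', 'movi']]
toy_test  = [['great', 'terribl', 'movi']]   # 'terribl' never appears in training

toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
print(toy_vectorizer.fit_transform(toy_train).toarray())  # one column per training word
print(toy_vectorizer.transform(toy_test).toarray())       # 'terribl' gets no column at all
print(toy_vectorizer.vocabulary_)                         # word -> column index mapping
###Output
_____no_output_____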
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays

def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
                         cache_dir=cache_dir, cache_file="bow_features.pkl"):
    """Extract Bag-of-Words for a given set of documents, already preprocessed into words."""

    # If cache_file is not None, try to read from it first
    cache_data = None
    if cache_file is not None:
        try:
            with open(os.path.join(cache_dir, cache_file), "rb") as f:
                cache_data = joblib.load(f)
            print("Read features from cache file:", cache_file)
        except:
            pass  # unable to read from cache, but that's okay

    # If cache is missing, then do the heavy lifting
    if cache_data is None:
        # Fit a vectorizer to training documents and use it to transform them
        # NOTE: Training documents have already been preprocessed and tokenized into words;
        #       pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
        vectorizer = CountVectorizer(max_features=vocabulary_size,
                preprocessor=lambda x: x, tokenizer=lambda x: x)  # already preprocessed
        features_train = vectorizer.fit_transform(words_train).toarray()

        # Apply the same vectorizer to transform the test documents (ignore unknown words)
        features_test = vectorizer.transform(words_test).toarray()
        # NOTE: Remember to convert the features using .toarray() for a compact representation

        # Write to cache file for future runs (store vocabulary as well)
        if cache_file is not None:
            vocabulary = vectorizer.vocabulary_
            cache_data = dict(features_train=features_train, features_test=features_test,
                              vocabulary=vocabulary)
            with open(os.path.join(cache_dir, cache_file), "wb") as f:
                joblib.dump(cache_data, f)
            print("Wrote features to cache file:", cache_file)
    else:
        # Unpack data loaded from cache file
        features_train, features_test, vocabulary = (cache_data['features_train'],
                cache_data['features_test'], cache_data['vocabulary'])

    # Return both the extracted features as well as the vocabulary
    return features_train, features_test, vocabulary

# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)

len(train_X[100])

print(vocabulary)
###Output
{'give': 1959, '9': 93, '10': 4, 'need': 3030, 'view': 4779, 'current': 1148, 'mind': 2890, 'frame': 1845, 'fresh': 1865, 'perspect': 3299, 'rememb': 3673, 'amaz': 233, 'psycholog': 3518, 'thriller': 4500, 
'medium': 2834, 'profound': 3484, 'arab': 302, 'inflict': 2333, 'amongst': 244, 'chapter': 807, 'v': 4732, 'websit': 4865, 'holm': 2185, 'ladi': 2566, 'stiff': 4254, 'tiger': 4514, 'hidden': 2152, 'intellig': 2372, 'moreov': 2954, 'sacrif': 3831, 'duck': 1426, 'bulli': 672, 'thick': 4481, 'golden': 1985, 'com': 948, 'user': 4725, 'nyc': 3116, 'screenwrit': 3898, 'gandhi': 1911, 'riot': 3753, 'count': 1076, 'furi': 1894, 'bang': 426, 'random': 3579, 'exercis': 1626, 'meaningless': 2825, 'artsi': 327, 'indulg': 2325, 'brand': 609, '13th': 9, 'rise': 3755, 'comed': 953, 'confirm': 1004, 'profil': 3482, 'upon': 4716, 'superfici': 4355, 'denni': 1253, 'hopper': 2206, 'stomach': 4265, 'smile': 4106, 'spiritu': 4189, 'junk': 2493, 'moon': 2950, 'irrit': 2405, 'chri': 857, 'penn': 3281, 'dirti': 1323, 'cancer': 717, 'unexpect': 4676, 'jump': 2489, 'clumsi': 923, 'till': 4516, 'spoil': 4196, 'tast': 4426, 'devic': 1290, 'rivet': 3763, 'excus': 1624, 'hey': 2151, 'incoher': 2307, 'inconsist': 2310, 'character': 809, 'narr': 3008, 'adolesc': 153, 'butler': 688, 'grown': 2045, 'attitud': 366, 'consist': 1021, 'religion': 3667, 'charisma': 813, 'sin': 4049, 'colleg': 942, 'topic': 4551, 'nuditi': 3109, 'cerebr': 788, 'bush': 682, 'hypnot': 2251, 'easier': 1458, 'decor': 1219, 'aesthet': 163, 'summer': 4346, 'defin': 1232, 'sappi': 3855, 'sentiment': 3946, 'solut': 4128, 'credibl': 1109, 'giant': 1948, 'anna': 264, 'simpson': 4047, 'indiffer': 2322, 'likewis': 2655, 'compliment': 981, 'amanda': 230, 'jenni': 2443, 'maid': 2750, 'wannab': 4829, 'homer': 2190, 'whenev': 4886, 'shallow': 3977, 'abysm': 114, 'primarili': 3457, 'swear': 4385, 'cue': 1139, 'card': 735, 'mst3k': 2975, 'toni': 4545, 'carradin': 750, 'seventi': 3965, 'vanish': 4746, 'columbo': 947, 'whoever': 4893, 'per': 3285, 'pilot': 3332, 'alleg': 211, 'aggress': 178, 'weakest': 4856, 'soft': 4122, 'somebodi': 4130, 'shower': 4019, 'bride': 626, 'jan': 2428, 'sight': 4031, 'shed': 3987, 'bloodi': 553, 'tail': 4409, 'fake': 1678, 'spin': 4184, 'chair': 796, 'gonna': 1988, 'f': 1661, 'pig': 3330, 'latter': 2585, 'former': 1830, 'increas': 2313, 'trace': 4567, 'comprehend': 984, 'morgan': 2955, 'freeman': 1861, 'vega': 4754, 'whilst': 4889, 'petti': 3306, 'woodi': 4941, 'allen': 212, 'anni': 265, 'hall': 2074, 'dinner': 1315, 'ex': 1612, '28': 71, 'wind': 4916, 'mitchel': 2917, 'knightley': 2543, 'paltrow': 3227, 'austen': 376, 'incorrect': 2312, 'flashi': 1778, 'inaccur': 2299, 'exclus': 1623, 'fals': 1682, 'programm': 3486, '40': 80, 'wwii': 4973, 'undertak': 4672, 'muddl': 2978, 'composit': 983, 'wildli': 4909, 'overwhelm': 3202, 'broadcast': 641, 'announc': 266, 'peac': 3275, 'spare': 4163, 'audrey': 373, 'hepburn': 2145, 'ballet': 422, 'revolutionari': 3734, 'swept': 4390, 'russian': 3826, 'clueless': 922, 'branagh': 608, 'commend': 959, 'represent': 3691, 'captiv': 732, 'vocal': 4801, 'island': 2408, 'matthew': 2811, 'fox': 1842, 'j': 2417, 'pun': 3525, 'elvira': 1509, 'jodi': 2461, 'fanat': 1688, 'tender': 4457, 'mail': 2751, 'p': 3208, 'edi': 1474, 'alec': 199, 'guin': 2056, 'invest': 2393, 'sir': 4058, 'brood': 646, 'aspir': 337, 'vain': 4736, 'jerk': 2446, 'turkey': 4629, 'estat': 1581, 'obnoxi': 3120, 'pari': 3240, 'runner': 3821, 'concentr': 992, 'handsom': 2086, 'tick': 4510, 'sissi': 4060, 'onlin': 3156, '1994': 48, 'exquisit': 1651, 'kinda': 2531, 'stilt': 4257, 'censor': 782, 'wilson': 4913, 'polici': 3380, 'corpor': 1067, 'presid': 3438, 'dan': 1172, 'dick': 1300, 'lol': 2688, 'cusack': 1152, 'daili': 1164, 'vengeanc': 4757, 
'resolv': 3704, 'priceless': 3453, 'gross': 2040, 'matur': 2812, 'butt': 689, 'media': 2831, 'north': 3090, 'combat': 950, 'invad': 2390, 'journalist': 2477, 'frustrat': 1878, 'conserv': 1018, 'patriot': 3266, 'ideolog': 2261, 'mediocr': 2833, 'mill': 2884, 'dude': 1428, 'minimum': 2895, 'energet': 1536, 'wire': 4922, 'footbal': 1815, 'pretend': 3443, 'theatric': 4477, 'fist': 1769, 'menac': 2847, 'ear': 1449, 'humbl': 2237, 'ought': 3181, 'wheel': 4885, 'grave': 2016, 'parson': 3244, 'sum': 4343, 'slice': 4088, 'hippi': 2166, 'consider': 1020, 'sunday': 4348, 'luck': 2719, 'solo': 4127, 'mytholog': 3001, 'jaw': 2435, 'der': 1262, 'remot': 3676, 'ban': 423, 'germani': 1942, 'closest': 915, 'descript': 1269, 'method': 2862, 'degre': 1235, 'collaps': 938, 'occup': 3130, 'variat': 4749, 'filler': 1749, 'partli': 3251, 'drain': 1397, 'baffl': 413, 'awe': 394, 'disgust': 1339, 'faint': 1673, 'cd': 776, 'lo': 2679, 'essenti': 1579, 'cloud': 918, 'border': 582, 'spoken': 4199, 'unravel': 4703, 'challeng': 797, 'volum': 4804, 'degrad': 1234, 'particip': 3248, 'explos': 1646, 'finger': 1759, 'button': 690, '1974': 31, 'gem': 1927, 'buddi': 662, 'exit': 1630, 'elev': 1498, 'claud': 891, 'patrick': 3265, 'harvey': 2107, 'spacey': 4158, 'lectur': 2611, 'marshal': 2789, 'shape': 3980, 'propos': 3503, 'inexplic': 2329, 'miracul': 2902, 'phone': 3316, 'driver': 1416, 'deni': 1251, 'steve': 4250, 'leg': 2615, 'ward': 4832, 'clair': 883, 'morn': 2957, 'fetch': 1734, 'mario': 2782, 'highest': 2157, 'roll': 3781, 'knee': 2539, 'rap': 3585, 'rope': 3796, 'hood': 2198, 'funnier': 1892, '1981': 37, 'youth': 4992, 'beard': 459, 'asian': 332, 'elvi': 1508, 'seller': 3933, 'sat': 3858, 'pink': 3334, 'muppet': 2984, 'conceiv': 991, 'flop': 1794, 'blew': 546, '1983': 38, 'ahead': 182, 'rick': 3744, 'warrior': 4840, 'witch': 4927, 'obtain': 3125, 'possess': 3401, 'luckili': 2721, 'chest': 833, 'candi': 718, 'batman': 451, 'knock': 2544, '35': 76, 'joker': 2470, 'freez': 1862, 'alfr': 204, 'corni': 1066, 'miik': 2876, 'divorc': 1363, 'rider': 3748, 'meanwhil': 2827, 'boot': 581, 'item': 2415, 'tokyo': 4535, 'isol': 2409, 'christoph': 863, 'leo': 2627, 'jeremi': 2445, 'devast': 1288, 'youngest': 4991, '12': 7, 'convers': 1047, 'boil': 568, 'sincer': 4052, 'tip': 4524, 'hal': 2071, 'hartley': 2106, 'styliz': 4313, 'ironi': 2403, 'bitter': 528, 'luci': 2717, 'traumat': 4591, 'detract': 1287, 'pseudo': 3513, 'attach': 360, 'psycho': 3517, 'proce': 3472, 'angst': 261, 'lucki': 2720, 'uk': 4647, 'mad': 2741, 'sheriff': 3994, 'presum': 3442, 'traci': 4568, 'pacino': 3211, 'discuss': 1335, 'beatti': 464, 'hoffman': 2176, 'eccentr': 1466, 'dicken': 1301, '1984': 39, 'halloween': 2075, 'scroog': 3902, 'bob': 564, 'closer': 914, 'godzilla': 1979, 'turtl': 4632, 'shadow': 3972, 'samurai': 3849, 'saga': 3838, 'flame': 1775, 'teeth': 4447, 'shell': 3991, 'feet': 1722, 'nuclear': 3107, 'awaken': 390, 'rampag': 3575, 'psychic': 3516, 'jewish': 2454, 'z': 4993, 'pole': 3377, 'fest': 1732, 'lewi': 2639, 'swallow': 4384, 'wolf': 4933, 'grey': 2031, 'spread': 4206, 'claustrophob': 892, 'afford': 167, '100': 5, 'rot': 3801, 'countrysid': 1081, 'lion': 2666, 'preserv': 3437, 'simultan': 4048, 'relief': 3665, 'oppos': 3164, 'divin': 1362, 'inferior': 2332, 'unless': 4697, 'campi': 713, 'hopelessli': 2204, 'mar': 2774, 'heroin': 2148, 'cure': 1144, 'instrument': 2367, 'contemporari': 1031, 'jazz': 2437, 'sole': 4125, 'complaint': 977, 'carmen': 747, 'scandal': 3868, 'bird': 521, 'hammer': 2080, 'shi': 3995, 'nostalg': 3093, 'oliv': 3151, 
'blue': 557, 'shirley': 4000, 'concert': 995, '1972': 29, 'illeg': 2269, 'motorcycl': 2965, 'crawl': 1100, 'reput': 3695, '18': 14, 'june': 2490, 'princess': 3461, 'pg': 3307, 'cabin': 695, 'mountain': 2967, 'juli': 2486, 'greg': 2028, 'nearbi': 3022, 'startl': 4234, 'cell': 779, 'distant': 1351, 'neighbor': 3034, 'alcohol': 198, 'conduct': 1000, 'plight': 3360, 'properli': 3498, 'embrac': 1513, 'constant': 1023, 'detach': 1283, 'grab': 2001, 'string': 4290, 'uncov': 4660, 'defend': 1229, 'plagu': 3343, 'mission': 2912, 'celluloid': 780, 'astonish': 351, 'fatal': 1705, 'khan': 2521, 'proper': 3497, 'unrealist': 4705, 'kapoor': 2501, 'heel': 2132, 'spiral': 4187, 'exposit': 1648, 'unbear': 4655, 'web': 4864, 'charg': 812, 'upset': 4718, 'franki': 1853, 'satan': 3859, 'jesu': 2450, 'asham': 330, 'disagre': 1325, 'carel': 739, 'burn': 678, 'newspap': 3057, 'restrain': 3714, 'broad': 640, 'incident': 2304, 'ish': 2407, 'molli': 2933, 'aussi': 375, 'exposur': 1649, 'owner': 3206, 'dump': 1435, 'regret': 3648, 'muslim': 2992, 'honestli': 2195, 'equip': 1568, 'underground': 4664, 'experiment': 1638, 'crook': 1124, 'polanski': 3376, 'tenant': 4454, 'drove': 1419, 'frighten': 1872, 'terror': 4467, 'terri': 4462, 'gilliam': 1952, 'bruce': 653, 'willi': 4911, 'pitt': 3340, 'twelv': 4634, 'convincingli': 1052, 'trilog': 4609, 'mel': 2837, 'brook': 647, 'wing': 4918, 'stare': 4230, 'stoog': 4268, 'mexican': 2863, 'rita': 3757, 'bell': 483, 'moe': 2931, 'howard': 2225, 'curli': 1147, 'counterpart': 1078, 'gag': 1905, 'endear': 1532, 'settl': 3962, '25': 70, 'rob': 3766, 'al': 191, 'larri': 2579, '2nd': 72, 'editor': 1476, 'choke': 848, 'randomli': 3580, 'horrid': 2210, 'inan': 2301, 'watcher': 4846, 'unwatch': 4712, 'spite': 4191, 'tacki': 4404, 'drunk': 1423, 'silent': 4036, 'buster': 686, 'st': 4212, '1995': 49, 'paid': 3216, 'afterward': 173, 'outcom': 3183, 'observ': 3122, 'fed': 1719, 'spoke': 4198, 'ethan': 1587, 'improvis': 2297, 'bravo': 613, 'accus': 128, 'spi': 4178, 'bedroom': 469, 'teas': 4437, 'driven': 1415, 'network': 3047, 'cancel': 716, 'midnight': 2873, 'showcas': 4017, 'billi': 518, 'zane': 4994, 'sean': 3906, 'jare': 2433, 'hunt': 2244, 'gate': 1921, 'cher': 831, 'nichola': 3060, 'cage': 697, 'firstli': 1765, 'cring': 1118, 'acquaint': 133, 'frankli': 1854, 'prime': 3458, 'exot': 1631, 'phoni': 3317, 'borrow': 587, 'cb': 775, 'pioneer': 3335, 'spice': 4179, 'format': 1829, 'versu': 4764, 'program': 3485, 'cup': 1143, 'tea': 4432, 'drip': 1412, 'label': 2561, 'beverli': 504, 'bradi': 606, 'buzz': 692, 'stood': 4267, 'ticket': 4511, 'playboy': 3353, 'incompet': 2308, 'quarter': 3544, 'stinker': 4260, '1977': 33, 'virginia': 4790, 'tech': 4438, 'enthral': 1556, 'hat': 2108, 'rubi': 3814, 'eleg': 1495, 'sensit': 3942, 'el': 1489, 'flip': 1789, 'saturday': 3862, 'brosnan': 649, 'scottish': 3893, 'kick': 2522, 'nowaday': 3104, 'viru': 4793, 'disabl': 1324, 'equival': 1569, 'click': 900, 'disjoint': 1341, 'corps': 1068, 'incomprehens': 2309, 'destin': 1279, 'sake': 3842, 'uninterest': 4690, 'k': 2498, 'septemb': 3949, 'montag': 2943, 'stiller': 4256, 'owen': 3204, 'clip': 910, 'correct': 1069, 'seagal': 3904, 'audio': 371, 'own': 3205, '1968': 25, 'anthoni': 273, 'aristocrat': 311, 'revolut': 3733, 'ace': 129, 'mitch': 2916, 'repetit': 3685, 'paramount': 3237, 'wont': 4937, 'crucial': 1129, 'alan': 193, 'bate': 448, 'canadian': 715, 'rehash': 3650, 'sun': 4347, 'dynam': 1445, 'winchest': 4915, '73': 89, 'rifl': 3750, 'hudson': 2229, 'stewart': 4252, 'cope': 1059, 'julia': 2487, 'persona': 
3298, 'unsatisfi': 4707, 'flawless': 1782, 'downey': 1389, 'jr': 2480, 'fallen': 1681, 'duval': 1441, '1940': 21, 'miscast': 2904, 'casino': 758, 'defens': 1230, 'chip': 846, 'shoulder': 4013, 'chew': 834, 'er': 1570, 'hip': 2165, 'lifestyl': 2648, 'consciou': 1015, 'jade': 2422, 'gambl': 1909, 'prequel': 3434, 'misguid': 2908, 'twin': 4638, 'imit': 2278, 'disbelief': 1330, 'toy': 4566, 'nostalgia': 3094, '19': 15, 'antonioni': 279, 'itali': 2413, 'sweep': 4388, 'sixti': 4067, 'bound': 595, 'paus': 3271, 'lesser': 2633, 'broke': 643, 'bsg': 656, 'stargat': 4231, 'atlanti': 356, 'spain': 4160, 'advic': 161, 'meander': 2823, 'taylor': 4431, 'roman': 3783, 'declin': 1218, 'jess': 2448, 'franco': 1850, 'egg': 1483, 'aliv': 210, 'survivor': 4375, 'coach': 925, 'walter': 4825, 'eva': 1593, 'trashi': 4589, 'poe': 3365, 'damag': 1167, 'melt': 2842, 'struck': 4298, 'trick': 4606, 'hyde': 2249, 'impli': 2288, 'che': 822, 'skip': 4073, 'enemi': 1535, 'overlook': 3200, 'key': 2520, 'primit': 3459, 'hyster': 2252, 'factori': 1669, 'archiv': 304, 'vader': 4734, 'preach': 3418, 'tool': 4548, 'wield': 4904, 'uniform': 4685, 'eighti': 1486, 'radic': 3564, 'reed': 3637, 'ham': 2077, 'mobster': 2923, 'elimin': 1501, 'rub': 3811, 'valentin': 4737, 'armstrong': 314, 'madonna': 2743, 'sooner': 4141, 'judi': 2484, 'andrew': 255, 'off': 3136, 'carol': 748, 'secondari': 3911, 'lighter': 2652, '3000': 75, 'bollywood': 571, 'prejudic': 3428, 'sloppi': 4096, 'moor': 2951, 'seal': 3905, 'beg': 472, 'happili': 2092, 'meaning': 2824, 'healthi': 2119, 'unawar': 4654, 'bye': 693, 'court': 1086, 'floor': 1793, 'clue': 921, 'sandler': 3852, 'cruis': 1133, 'passeng': 3256, 'proof': 3494, 'stale': 4219, 'arthur': 323, 'roy': 3809, 'cassidi': 761, '1969': 26, 'innov': 2350, 'transit': 4583, 'replac': 3686, 'camcord': 705, 'emphas': 1520, 'dirt': 1322, 'rout': 3806, '1933': 18, 'forth': 1832, 'dismal': 1343, '1973': 30, 'region': 3646, 'discern': 1332, 'downright': 1391, 'repuls': 3694, 'lauren': 2592, 'sitcom': 4063, 'h': 2063, 'korean': 2550, 'carpent': 749, 'jami': 2427, 'curti': 1151, 'cross': 1125, 'relentless': 3661, 'psychopath': 3519, 'myer': 2997, 'fifteen': 1741, 'packag': 3213, 'moodi': 2949, 'maniac': 2767, 'bump': 674, 'eas': 1456, 'pet': 3303, 'modesti': 2930, 'travesti': 4593, 'option': 3167, '22': 68, 'fuller': 1884, 'resist': 3702, 'sky': 4076, 'roof': 3790, 'mine': 2892, 'pal': 3221, 'subplot': 4318, 'ian': 2253, 'griffith': 2033, 'prevent': 3447, 'abound': 104, 'racist': 3562, 'hulk': 2234, 'capot': 730, 'princip': 3462, 'blake': 535, 'punish': 3527, 'superbl': 4354, 'schlock': 3880, 'analysi': 248, 'ration': 3593, 'coaster': 927, 'drink': 1411, 'foil': 1804, 'adopt': 154, 'sucker': 4330, 'ami': 241, 'buff': 664, 'historian': 2170, 'captain': 731, 'tens': 4458, 'realis': 3607, 'speech': 4173, 'percept': 3287, 'revolv': 3735, 'ingeni': 2337, 'drone': 1417, 'nanci': 3007, 'disastr': 1329, 'parad': 3234, 'pile': 3331, 'colin': 936, 'farrel': 1697, 'fairi': 1675, 'freak': 1856, 'choppi': 851, 'cash': 757, '000': 2, 'bumbl': 673, 'idiot': 2262, 'preston': 3441, 'advantag': 158, 'eleven': 1499, '17': 13, 'romp': 3788, 'slide': 4090, 'provok': 3512, 'blade': 532, 'suitabl': 4341, 'unsettl': 4709, 'synopsi': 4399, 'link': 2665, 'www': 4974, 'uplift': 4715, 'anger': 258, 'invas': 2391, 'southern': 4154, 'jeff': 2441, 'ariel': 309, 'diari': 1299, 'ah': 181, 'fx': 1899, 'static': 4237, 'fifti': 1743, 'tyler': 4641, 'altman': 226, 'franci': 1848, 'ford': 1818, 'scorses': 3890, 'ash': 329, 'cagney': 698, 'conscious': 
1016, 'preced': 3420, 'notabl': 3095, 'rod': 3778, 'antholog': 272, 'logan': 2685, 'crisi': 1120, 'stanley': 4227, 'lit': 2671, 'carl': 743, 'sneak': 4113, 'oddli': 3135, 'ambigu': 236, 'dilemma': 1310, 'randolph': 3578, 'compens': 973, 'berlin': 495, 'basket': 444, 'tap': 4419, 'gadget': 1904, '1989': 44, 'unsuspect': 4710, 'lawyer': 2597, 'firm': 1762, 'deceas': 1212, 'client': 901, 'funer': 1890, 'minist': 2897, 'occupi': 3131, 'assist': 345, 'outer': 3185, 'hollow': 2183, 'cruelti': 1132, 'darker': 1184, 'canada': 714, 'attenborough': 363, 'kurt': 2556, 'russel': 3824, 'cameron': 709, 'crow': 1126, 'distinguish': 1353, 'taxi': 4430, 'blair': 534, 'boyfriend': 603, 'doll': 1370, 'bread': 615, 'fright': 1871, 'loyal': 2714, 'campbel': 712, 'dame': 1168, 'inappropri': 2302, 'treasur': 4594, 'patienc': 3262, 'restor': 3713, 'mighti': 2875, 'reviv': 3731, 'rapidli': 3587, 'garbo': 1916, 'swedish': 4387, 'en': 1526, 'gestur': 1943, 'asset': 343, 'foolish': 1812, 'adequ': 149, 'martha': 2790, 'daniel': 1179, 'span': 4161, 'choreographi': 853, 'liu': 2676, 'skull': 4075, 'pin': 3333, 'garner': 1919, 'harrison': 2103, 'tank': 4418, 'glimps': 1967, 'unforgett': 4681, 'todd': 4530, 'sheet': 3989, 'spoof': 4201, 'nuanc': 3106, 'soup': 4151, 'brush': 654, 'undead': 4662, 'marin': 2781, 'bastard': 446, 'forti': 1833, 'length': 2625, 'tax': 4429, 'palanc': 3223, 'comedian': 955, 'reaction': 3602, 'shift': 3997, 'habit': 2065, 'sugar': 4337, 'grandmoth': 2010, 'virgin': 4789, 'matt': 2808, 'shelley': 3992, 'dictat': 1302, 'tourist': 4562, 'lavish': 2594, 'technicolor': 4440, 'spectacl': 4171, 'braveheart': 612, 'oppress': 3166, 'impos': 2292, 'undeni': 4663, 'lang': 2575, 'wardrob': 4833, 'tim': 4517, 'cox': 1091, 'calm': 704, 'mock': 2924, 'scotland': 3891, 'visitor': 4797, 'perman': 3294, 'heartfelt': 2125, 'luxuri': 2733, 'businessman': 684, 'philip': 3312, 'museum': 2989, 'ocean': 3133, 'sea': 3903, 'triangl': 4603, 'suppli': 4360, 'stark': 4232, 'worker': 4945, 'interior': 2379, 'pride': 3454, 'alright': 221, 'lemmon': 2622, 'orient': 3172, '1979': 35, 'ladder': 2565, 'femm': 1730, 'belli': 484, 'toilet': 4533, 'poker': 3375, 'critiqu': 1122, 'bargain': 432, 'rude': 3815, 'cant': 724, 'chuckl': 866, 'bless': 545, 'puppi': 3530, 'btw': 657, 'wilder': 4908, 'insipid': 2355, 'resourc': 3707, 'profan': 3478, 'victor': 4773, 'mclaglen': 2819, 'nolan': 3079, 'betray': 500, 'beatl': 463, 'overact': 3194, 'turner': 4631, 'tour': 4561, '1939': 20, 'legal': 2617, 'horn': 2207, 'stylish': 4312, 'birthday': 523, 'grate': 2014, 'cuba': 1136, 'tower': 4564, 'gentl': 1935, 'outing': 3187, 'shark': 3982, 'passabl': 3254, 'beneath': 491, 'varieti': 4750, 'sleazi': 4085, 'outfit': 3186, 'vanc': 4744, 'eugen': 1589, 'helen': 2136, 'talki': 4415, 'luka': 2725, 'illog': 2270, 'sentinel': 3947, 'graini': 2007, 'alison': 209, 'angel': 256, 'goldblum': 1984, 'guard': 2050, 'snl': 4115, 'newman': 3055, 'fart': 1698, 'juvenil': 2497, 'tube': 4625, 'fri': 1866, 'grasp': 2013, 'absenc': 107, 'oper': 3159, 'cheat': 824, 'abrupt': 106, 'mansion': 2772, 'backdrop': 406, 'relev': 3662, 'plod': 3361, 'notion': 3100, 'safeti': 3837, 'biographi': 520, 'map': 2773, 'prostitut': 3504, 'mildr': 2880, 'dish': 1340, 'venom': 4758, 'astound': 352, 'nolt': 3080, 'taught': 4428, 'suprem': 4364, 'bat': 447, 'excess': 1620, 'henc': 2143, 'spark': 4164, 'coup': 1082, 'franc': 1846, 'coloni': 944, 'peer': 3279, 'cynic': 1159, 'pronounc': 3493, 'geni': 1932, 'spider': 4180, 'bow': 597, 'airplan': 186, 'remad': 3669, 'dig': 1307, 'barrel': 434, 
'werewolf': 4876, 'coat': 928, 'wed': 4866, 'silver': 4038, 'immigr': 2283, '1920': 16, 'pervers': 3301, 'akin': 189, 'contempl': 1030, 'latin': 2584, 'omen': 3154, 'mafia': 2744, 'thief': 4482, 'diamond': 1296, 'heist': 2134, 'tommi': 4540, 'dose': 1384, 'deed': 1222, 'gear': 1925, 'press': 3439, 'disc': 1331, 'trait': 4579, 'scientist': 3885, 'decapit': 1211, 'crippl': 1119, 'closet': 916, 'uncut': 4661, 'abc': 98, 'carter': 753, 'rat': 3590, 'spree': 4207, 'bag': 414, 'urg': 4720, 'da': 1161, 'correctli': 1070, 'surf': 4366, 'doc': 1365, 'measur': 2828, 'agenc': 175, 'insur': 2369, 'employe': 1524, 'basing': 443, 'del': 1236, '1976': 32, 'fragil': 1844, 'apolog': 289, 'judgment': 2483, 'prom': 3489, 'april': 301, 'overdon': 3197, 'tag': 4408, 'prop': 3495, 'edgi': 1473, 'slash': 4080, 'id': 2256, 'price': 3452, 'benefit': 492, 'vulner': 4811, 'goofi': 1991, 'preachi': 3419, 'frog': 1873, 'repeatedli': 3684, 'athlet': 355, 'wholli': 4895, 'assert': 342, 'resurrect': 3718, 'butcher': 687, 'charlton': 818, 'heston': 2150, 'atroc': 358, 'charl': 815, 'lester': 2635, 'recognit': 3623, 'argument': 308, 'mystic': 2999, 'interpret': 2382, 'evolut': 1610, 'compass': 971, 'taboo': 4402, 'sore': 4144, 'religi': 3666, 'mutual': 2996, 'bin': 519, 'cia': 868, 'henri': 2144, 'fonda': 1808, 'cain': 699, 'skit': 4074, 'pregnant': 3427, 'strain': 4277, 'circu': 876, 'robinson': 3772, 'inhabit': 2340, 'behold': 479, 'california': 702, 'brown': 652, 'colleagu': 939, 'fulfil': 1882, 'dwarf': 1443, 'servant': 3958, 'leather': 2609, 'explod': 1643, 'randi': 3577, 'fulci': 1881, 'awak': 389, 'hooker': 2200, 'idol': 2263, 'acquir': 134, 'meyer': 2865, 'stallon': 4222, 'reserv': 3700, 'snow': 4116, 'storm': 4272, 'robber': 3767, 'retriev': 3722, 'environ': 1563, 'valley': 4739, 'forest': 1820, 'econom': 1468, 'punk': 3528, 'breath': 620, 'bitch': 525, 'elabor': 1490, 'elm': 1505, 'cyborg': 1157, 'splatter': 4192, 'fever': 1735, 'pitch': 3338, 'sox': 4156, 'sport': 4203, 'greedi': 2024, 'swing': 4392, 'laurel': 2591, 'baldwin': 420, 'sleepwalk': 4087, 'deliveri': 1244, 'bash': 440, 'implaus': 2287, 'fog': 1803, 'encourag': 1530, 'meryl': 2854, 'streep': 4282, 'selfish': 3931, 'deliber': 1238, 'godfath': 1978, 'constitut': 1025, 'ran': 3576, 'centr': 785, 'monk': 2938, 'built': 668, 'steel': 4244, 'repli': 3688, 'dust': 1438, 'belushi': 488, 'stuart': 4301, 'attorney': 367, 'bath': 449, 'yawn': 4978, 'flair': 1774, 'boyl': 604, 'widow': 4903, 'ann': 263, 'reid': 3652, 'craig': 1094, 'compromis': 987, 'inject': 2344, 'alarm': 194, 'albeit': 195, 'phrase': 3321, 'fluff': 1798, '21st': 67, 'hack': 2066, 'chees': 828, 'falk': 1679, 'hank': 2088, 'murray': 2987, 'guid': 2053, 'collector': 941, 'interrupt': 2383, 'andi': 253, '1978': 34, 'phantom': 3308, 'norm': 3087, 'pound': 3408, 'loser': 2699, 'blah': 533, 'turd': 4628, 'bounc': 594, 'rhythm': 3740, 'antonio': 278, '2008': 65, 'goof': 1990, 'shield': 3996, 'lisa': 2668, 'kidnap': 2526, 'caught': 772, 'jacket': 2419, 'clinic': 908, 'mexico': 2864, 'margaret': 2777, 'toronto': 4555, 'myth': 3000, 'nerd': 3043, 'geek': 1926, 'worship': 4951, 'awhil': 396, 'scientif': 3884, 'london': 2689, 'darren': 1186, 'mum': 2981, 'legitim': 2620, 'frontier': 1876, 'solv': 4129, 'ebert': 1465, 'info': 2335, 'spinal': 4185, 'album': 197, 'reduc': 3635, 'caricatur': 742, 'spontan': 4200, 'hop': 2201, 'biker': 514, 'bike': 513, 'ireland': 2400, 'decis': 1216, 'crocodil': 1123, 'rooney': 3794, 'foul': 1838, 'smell': 4105, 'baker': 417, 'conneri': 1010, 'boredom': 584, 'slam': 4077, 
'stack': 4214, 'complain': 976, 'cemeteri': 781, 'sunshin': 4351, 'glow': 1974, 'mason': 2797, 'miller': 2885, 'iv': 2416, 'mummi': 2982, 'basement': 439, 'fund': 1888, 'cigarett': 869, 'throat': 4501, 'graduat': 2005, 'comb': 949, 'rocki': 3777, 'montana': 2944, 'painter': 3219, 'albert': 196, 'norman': 3089, 'reign': 3653, 'brillianc': 632, 'pocket': 3364, 'susan': 4376, 'prank': 3415, 'aveng': 384, 'ned': 3029, 'nineti': 3071, 'ambiti': 238, 'atroci': 359, 'tactic': 4406, 'rapist': 3588, 'gori': 1995, 'wtf': 4971, 'neatli': 3025, 'conspiraci': 1022, 'theori': 4479, 'tunnel': 4627, 'ron': 3789, 'grinch': 2036, 'lou': 2703, 'carrey': 751, 'heap': 2120, 'carey': 740, 'metal': 2860, 'perceiv': 3286, 'convert': 1048, 'gloriou': 1972, 'pattern': 3267, 'photo': 3318, 'bake': 416, 'reson': 3705, 'risk': 3756, 'loretta': 2697, 'victori': 4774, 'restaur': 3712, 'cinderella': 870, 'christi': 859, 'vaniti': 4747, 'curtain': 1150, 'donna': 1378, '2004': 61, '1991': 46, '2002': 59, 'rumor': 3819, 'repris': 3693, 'calib': 701, 'characterist': 811, 'buri': 677, 'recit': 3620, 'belt': 487, 'scar': 3869, 'stress': 4286, 'immatur': 2279, 'rhyme': 3739, 'unorigin': 4700, 'worm': 4947, 'loneli': 2691, 'niro': 3073, 'nightclub': 3067, 'deriv': 1265, 'wanna': 4828, 'homag': 2187, 'tarantino': 4422, 'furthermor': 1896, 'pen': 3280, 'characteris': 810, 'arguabl': 307, 'amateur': 231, 'horrifi': 2212, 'slight': 4091, 'sh': 3970, 'olli': 3153, 'milk': 2883, 'bacon': 409, 'anderson': 252, 'colonel': 943, 'neill': 3037, 'renaiss': 3678, 'galaxi': 1908, 'jackson': 2421, 'yeti': 4987, 'tame': 4417, 'improb': 2295, 'toss': 4557, 'verhoeven': 4762, 'gerard': 1939, 'lure': 2729, 'christin': 861, 'sicken': 4025, 'hitchcock': 2172, 'wive': 4931, '1999': 52, 'akshay': 190, 'kumar': 2553, 'amitabh': 242, 'hilar': 2161, 'roller': 3782, '99': 96, '16': 12, 'virtu': 4791, 'inher': 2341, 'breed': 622, 'speci': 4168, 'diseas': 1336, 'sirk': 4059, 'chicago': 835, 'ambit': 237, 'hostil': 2218, 'wave': 4849, 'stimul': 4258, 'weather': 4862, 'joel': 2463, 'agenda': 176, 'jay': 2436, 'bridget': 628, 'http': 2228, 'sustain': 4382, 'dylan': 1444, 'leigh': 2621, 'eastern': 1461, 'hung': 2242, 'decept': 1214, 'furiou': 1895, 'showdown': 4018, 'coher': 932, 'kitchen': 2536, 'brando': 610, 'contempt': 1032, 'electron': 1494, 'substanc': 4320, 'misfortun': 2907, 'cathol': 770, 'beaten': 462, 'shaki': 3975, 'scariest': 3874, 'enforc': 1538, 'timothi': 4522, 'nope': 3086, 'bourn': 596, 'damon': 1171, 'angela': 257, 'paula': 3269, 'neurot': 3048, 'endur': 1534, 'sympathi': 4398, 'deem': 1223, 'clan': 884, 'amor': 245, 'torn': 4554, 'knife': 2541, 'gillian': 1953, 'orson': 3176, 'shanghai': 3979, 'lab': 2560, 'literari': 2673, 'wrestl': 4963, 'palma': 3226, 'widescreen': 4901, 'policeman': 3379, 'pot': 3405, 'naughti': 3018, 'legaci': 2616, 'rome': 3786, 'unknown': 4695, 'bust': 685, 'lean': 2605, 'deaf': 1200, 'bias': 507, 'offend': 3137, 'cliffhang': 903, 'nerv': 3044, 'faster': 1703, 'rambo': 3573, 'perpetu': 3295, 'poison': 3372, 'robbin': 3769, 'financ': 1754, 'salman': 3845, 'jo': 2458, 'clara': 885, 'scarecrow': 3871, 'stolen': 4264, 'spill': 4183, 'pale': 3224, 'blob': 548, 'mcqueen': 2820, 'aborigin': 102, 'bay': 453, 'uwe': 4731, 'boll': 570, 'disgrac': 1337, 'spike': 4182, 'sleaz': 4084, 'neighborhood': 3035, 'undermin': 4666, 'imperson': 2286, 'ritchi': 3758, 'alex': 201, 'east': 1460, 'durat': 1437, 'verg': 4761, 'labor': 2562, 'casper': 759, 'boo': 577, 'tonight': 4546, 'vincent': 4786, 'hopeless': 2203, 'rubber': 3812, 'acid': 
131, 'blur': 559, 'leon': 2628, 'weight': 4869, 'bullet': 671, '75': 90, 'palac': 3222, 'traffic': 4573, '1993': 47, 'alicia': 206, 'princ': 3460, 'dire': 1317, 'nathan': 3014, 'justin': 2496, 'chaplin': 806, 'inabl': 2298, 'mundan': 2983, 'sibl': 4023, 'dysfunct': 1446, 'roommat': 3793, 'karl': 2503, 'goer': 1981, 'fay': 1713, '1986': 41, 'ho': 2175, 'candl': 720, 'tad': 4407, 'stir': 4261, 'bent': 493, 'kingdom': 2533, 'barn': 433, 'hapless': 2089, 'victoria': 4775, 'persuad': 3300, 'screw': 3899, 'bug': 666, 'pixar': 3341, 'evolv': 1611, 'ant': 270, 'expand': 1632, 'numer': 3112, 'tooth': 4549, 'trivia': 4616, 'min': 2889, 'conrad': 1013, 'magician': 2748, 'macabr': 2736, 'morri': 2959, 'vibrant': 4769, 'resum': 3717, 'wealthi': 4858, 'nicol': 3063, 'marlon': 2786, 'niec': 3065, 'miniseri': 2896, 'eyr': 1660, 'dalton': 1166, 'rochest': 3774, 'beer': 470, 'lili': 2656, 'judd': 2481, 'artwork': 328, 'laura': 2590, 'unfair': 4679, 'vile': 4783, 'fuel': 1880, 'bubbl': 659, 'wrap': 4960, 'cooper': 1057, 'leonard': 2629, 'fascist': 1700, 'housewif': 2224, 'gabriel': 1903, 'contrari': 1039, 'glorifi': 1971, 'rambl': 3572, 'ol': 3147, 'slimi': 4094, 'wrestler': 4964, 'articl': 324, 'vet': 4765, 'ghetto': 1945, 'march': 2776, 'standout': 4226, 'penni': 3282, 'johnson': 2467, 'corman': 1063, 'subtli': 4325, 'modest': 2929, 'karloff': 2504, 'bori': 585, 'rukh': 3817, 'demonstr': 1250, 'felix': 1723, 'drake': 1398, 'guest': 2052, 'expedit': 1634, 'diana': 1298, 'secretli': 3915, 'heal': 2117, 'collabor': 937, 'refresh': 3642, 'farc': 1693, 'acknowledg': 132, 'cypher': 1160, 'dispos': 1348, 'natali': 3013, 'dont': 1379, 'mann': 2770, 'cuban': 1137, 'despic': 1276, 'soderbergh': 4121, 'revolt': 3732, 'compris': 986, 'chamberlain': 798, 'cg': 792, 'basebal': 438, 'empir': 1522, 'porter': 3394, 'millionair': 2887, 'pant': 3231, 'lucil': 2718, 'betti': 503, 'francisco': 1849, 'inherit': 2342, 'toe': 4531, 'trademark': 4571, 'hackney': 2068, 'gray': 2017, 'royal': 3810, 'nina': 3069, 'superhero': 4356, 'fundament': 1889, 'shepherd': 3993, 'basketbal': 445, 'viewpoint': 4781, 'cream': 1103, 'watson': 4848, 'chess': 832, 'barri': 435, 'bitten': 527, 'comprehens': 985, 'biko': 516, 'unexplain': 4678, 'proport': 3502, 'rid': 3745, 'dentist': 1254, 'gimmick': 1954, 'mobil': 2922, 'dreari': 1405, 'vomit': 4805, 'retard': 3720, 'shatter': 3985, 'spielberg': 4181, 'temper': 4450, 'outlin': 3189, 'rooki': 3791, 'quaid': 3541, 'rendit': 3680, 'widmark': 4902, 'ritter': 3759, 'monoton': 2941, 'overlong': 3199, 'irrelev': 2404, 'conan': 990, 'graham': 2006, 'sue': 4333, 'jedi': 2440, 'tripl': 4613, 'kazan': 2508, 'predat': 3423, 'orlean': 3174, 'lengthi': 2626, 'scenario': 3875, 'melissa': 2838, 'hart': 2105, 'sabrina': 3830, 'zizek': 4996, 'bro': 639, 'attribut': 369, 'novak': 3102, 'kyle': 2557, 'pervert': 3302, 'useless': 4724, 'embark': 1511, 'austin': 377, 'cape': 728, 'wipe': 4921, 'breakfast': 618, 'brit': 636, 'margin': 2778, 'sparkl': 4165, 'mute': 2995, 'evan': 1594, 'derang': 1263, 'meal': 2821, 'boxer': 600, 'dixon': 1364, 'sidewalk': 4029, 'poster': 3404, 'shove': 4015, 'repress': 3692, 'owe': 3203, 'arrog': 320, 'oblig': 3118, 'what': 4882, 'sadist': 3834, 'emperor': 1519, 'damm': 1169, 'champion': 799, 'potter': 3407, 'stalker': 4221, 'sand': 3851, 'visibl': 4794, 'catchi': 767, 'intric': 2386, 'salt': 3846, 'fontain': 1809, 'abraham': 105, 'tribut': 4605, 'chop': 850, 'lincoln': 2658, 'flynn': 1800, 'duti': 1440, '1971': 28, 'salli': 3844, 'dreck': 1406, 'palm': 3225, 'louis': 2706, 'hello': 2139, 
'nervou': 3045, 'shred': 4021, 'arc': 303, 'grief': 2032, 'feat': 1717, 'rotten': 3803, 'blackmail': 531, 'porno': 3393, 'uh': 4646, 'pour': 3409, 'bonu': 576, 'simplic': 4045, 'victorian': 4776, 'spade': 4159, 'keen': 2510, 'util': 4728, 'scoop': 3887, 'ellen': 1504, 'huh': 2233, 'farm': 1695, 'seedi': 3922, 'cooki': 1055, 'retain': 3719, 'pauli': 3270, 'bett': 501, 'corbett': 1061, 'errol': 1574, 'sidney': 4030, 'cannon': 722, 'pokemon': 3374, 'ramon': 3574, 'jew': 2452, 'staff': 4215, 'ustinov': 4726, 'nod': 3076, 'bunni': 676, 'seed': 3921, 'infect': 2331, 'feminist': 1729, 'mermaid': 2853, 'mesmer': 2855, 'gundam': 2059, 'slug': 4099, 'lesli': 2631, 'harmless': 2100, 'olivi': 3152, 'restrict': 3715, 'trauma': 4590, 'lumet': 2727, 'exhaust': 1627, 'succe': 4326, 'pacif': 3210, 'monti': 2946, 'li': 2641, 'soccer': 4118, 'iran': 2398, 'schedul': 3878, 'hallucin': 2076, 'psychot': 3520, 'whack': 4880, 'goldsworthi': 1986, 'strongest': 4296, 'alvin': 228, 'hackman': 2067, 'macho': 2739, 'hamlet': 2079, 'julian': 2488, 'implic': 2289, 'recognis': 3622, 'down': 1388, 'chavez': 821, 'elit': 1502, 'newer': 3053, 'lifeless': 2647, 'dracula': 1394, 'stole': 4263, 'institut': 2365, 'disord': 1346, 'facil': 1666, 'matthau': 2810, 'cattl': 771, 'kirk': 2534, 'springer': 4209, 'marc': 2775, 'gina': 1955, 'sniper': 4114, 'phenomen': 3309, 'couch': 1074, 'zoom': 4999, 'esther': 1582, 'malon': 2762, 'cousin': 1087, 'elizabeth': 1503, 'wisdom': 4923, 'marti': 2791, 'frantic': 1855, 'monument': 2947, 'dorothi': 1383, 'hara': 2093, 'q': 3540, 'unpredict': 4702, 'underneath': 4667, 'shortcom': 4009, 'mortal': 2960, 'muscl': 2988, 'kent': 2517, 'wore': 4943, 'carla': 744, 'newli': 3054, 'vastli': 4753, 'buffalo': 665, 'maci': 2740, 'slightest': 4092, 'frontal': 1875, 'inmat': 2347, 'profess': 3479, 'scarfac': 3872, 'conscienc': 1014, 'psych': 3514, 'pickford': 3325, 'ross': 3800, 'joey': 2464, 'foxx': 1843, 'enterpris': 1554, 'absent': 108, 'jill': 2455, 'axe': 398, 'rosario': 3797, 'dawson': 1194, 'kurosawa': 2555, 'mol': 2932, 'summar': 4344, 'recogniz': 3624, 'globe': 1969, 'smack': 4100, 'milo': 2888, 'bacal': 402, 'bergman': 494, 'barrymor': 436, 'keith': 2512, 'gasp': 1920, 'signal': 4033, 'propheci': 3500, 'helm': 2140, 'crawford': 1099, 'nun': 3113, 'spray': 4205, 'tomorrow': 4541, 'peril': 3292, 'publish': 3522, 'pulp': 3524, 'clone': 912, 'vanessa': 4745, 'roar': 3765, 'hopkin': 2205, 'uma': 4650, '1996': 50, 'household': 2223, 'sid': 4026, 'bronson': 645, 'countless': 1079, 'dandi': 1176, 'anton': 277, 'carlito': 745, 'climat': 905, 'ratso': 3594, 'pete': 3304, 'mode': 2925, 'unseen': 4708, 'truman': 4622, 'marion': 2783, 'huston': 2248, 'cecil': 777, 'prey': 3451, 'holocaust': 2186, 'redund': 3636, 'hain': 2069, 'preming': 3430, 'patricia': 3264, 'lindsay': 2660, 'hostag': 2217, 'streisand': 4284, 'wendi': 4873, 'grotesqu': 2041, 'dolph': 1372, 'lundgren': 2728, 'greet': 2027, 'wwe': 4972, 'championship': 800, 'sandra': 3853, 'clad': 881, 'linger': 2664, 'flock': 1792, 'wealth': 4857, 'sergeant': 3952, 'entitl': 1560, 'shaw': 3986, 'filth': 1752, 'kolchak': 2548, 'frankenstein': 1852, 'linda': 2659, 'tripe': 4612, 'cush': 1153, 'ingrid': 2339, 'fido': 1739, 'gunga': 2060, 'din': 1314, 'macarthur': 2737, 'shorter': 4010, 'scoobi': 3886, 'doo': 1380, 'net': 3046, 'burton': 681, 'connor': 1011, 'nemesi': 3040, 'mathieu': 2806, 'dim': 1311, '1988': 43, 'kidman': 2525, 'paxton': 3272, 'kitti': 2537, 'replay': 3687, 'wax': 4850, 'stair': 4217, 'cartoonish': 755, 'mccoy': 2818, 'cap': 726, 
'antagonist': 271, 'bach': 403, '95': 95, 'enabl': 1527, 'oldest': 3150, 'groan': 2039, 'tierney': 4513, 'earl': 1450, 'cohen': 931, 'rosemari': 3799, 'corn': 1064, 'poem': 3366, 'astronaut': 353, 'naschi': 3011, 'israel': 2411, 'hug': 2230, 'handicap': 2084, 'timberlak': 4518, 'walsh': 4823, 'timon': 4521, 'chicken': 837, 'gabl': 1902, 'evelyn': 1596, 'affleck': 166, 'isra': 2410, 'junior': 2492, 'nicola': 3064, 'bud': 661, 'senior': 3937, 'unleash': 4696, 'pearl': 3277, 'uniformli': 4686, 'trier': 4607, 'domino': 1375, 'vignett': 4782, 'gere': 1940, 'simmon': 4041, 'raj': 3569, 'dudley': 1429, 'europa': 1591, 'narrow': 3010, 'einstein': 1487, 'wang': 4827, 'thunderbird': 4509, 'boston': 589, 'danish': 1180, 'tara': 4421, 'spock': 4195, '00': 1, 'radiat': 3563, 'boyer': 602, 'miyazaki': 2920}
###Markdown Step 4: Classification using XGBoost
Now that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker.
Writing the dataset
The XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then we will write those datasets to files and upload the files to S3. In addition, we will write the test set input to a file and upload that file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd

# Earlier we shuffled the training dataset, so to keep things simple we can just assign
# the first 10,000 reviews to the validation set and use the remaining reviews for training.
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])

val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets contain no headers or index and that, for the training and validation data, the label occurs first in each row.
For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/sentiment_update'
if not os.path.exists(data_dir):
    os.makedirs(data_dir)

# The test set is written without a label column; for train/validation the label comes first.
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)

# To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None.
test_X = train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown Uploading Training / Validation files to S3
Amazon's S3 service allows us to store files that can be accessed both by the built-in training models, such as the XGBoost model we will be using, and by custom models such as the one we will see a little later.
For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. 
The first is to use the low-level functionality of SageMaker, which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high-level functionality, in which certain choices have been made on the user's behalf. The low-level approach benefits from allowing the user a great deal of flexibility, while the high-level approach makes development much quicker. For our purposes we will opt to use the high-level approach, although using the low-level approach is certainly an option.
Recall the method `upload_data()`, which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.
For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker

session = sagemaker.Session() # Store the current SageMaker session

# S3 prefix (which folder will we use)
prefix = 'sentiment-update'

test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown Creating the XGBoost model
Now that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another:
- Model Artifacts
- Training Code (Container)
- Inference Code (Container)
The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.
The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.
The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as a way of packaging code up so that dependencies aren't an issue.
###Code
from sagemaker import get_execution_role

# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()

# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost')

# First we create a SageMaker estimator object for our model. 
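# A quick orientation before constructing it: the estimator ties together the pieces
# described above -- the container holds the training and inference code, the IAM role
# lets that code read and write the model artifacts, the instance count/type select the
# training hardware, and output_path is the S3 location where the resulting model
# artifacts will be stored. The algorithm-specific settings are then supplied separately
# via set_hyperparameters() below.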
xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2020-05-21 16:56:01 Starting - Starting the training job... 2020-05-21 16:56:03 Starting - Launching requested ML instances...... 2020-05-21 16:57:14 Starting - Preparing the instances for training... 2020-05-21 16:58:00 Downloading - Downloading input data... 2020-05-21 16:58:20 Training - Downloading the training image..Arguments: train [2020-05-21:16:58:40:INFO] Running standalone xgboost training. [2020-05-21:16:58:40:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8443.61mb [2020-05-21:16:58:40:INFO] Determined delimiter of CSV input is ',' [16:58:40] S3DistributionType set as FullyReplicated [16:58:42] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-05-21:16:58:42:INFO] Determined delimiter of CSV input is ',' [16:58:42] S3DistributionType set as FullyReplicated [16:58:43] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, 2020-05-21 16:58:39 Training - Training image download completed. Training in progress.[16:58:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [0]#011train-error:0.297533#011validation-error:0.3018 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. 
[16:58:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [1]#011train-error:0.281467#011validation-error:0.2855 [16:58:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [2]#011train-error:0.270933#011validation-error:0.2798 [16:58:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [3]#011train-error:0.2614#011validation-error:0.2718 [16:58:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [4]#011train-error:0.258067#011validation-error:0.2663 [16:58:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.254267#011validation-error:0.2657 [16:58:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [6]#011train-error:0.2492#011validation-error:0.2574 [16:58:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [7]#011train-error:0.243#011validation-error:0.2554 [16:58:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5 [8]#011train-error:0.2304#011validation-error:0.2431 [16:58:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [9]#011train-error:0.223#011validation-error:0.2394 [16:59:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [10]#011train-error:0.2186#011validation-error:0.237 [16:59:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 10 pruned nodes, max_depth=5 [11]#011train-error:0.2146#011validation-error:0.2331 [16:59:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [12]#011train-error:0.209333#011validation-error:0.2278 [16:59:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 2 pruned nodes, max_depth=5 [13]#011train-error:0.2066#011validation-error:0.2236 [16:59:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [14]#011train-error:0.205#011validation-error:0.2207 [16:59:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [15]#011train-error:0.205#011validation-error:0.2203 [16:59:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [16]#011train-error:0.201667#011validation-error:0.2196 [16:59:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [17]#011train-error:0.1984#011validation-error:0.2167 [16:59:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [18]#011train-error:0.193667#011validation-error:0.2124 [16:59:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [19]#011train-error:0.191533#011validation-error:0.2091 [16:59:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [20]#011train-error:0.188933#011validation-error:0.207 [16:59:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [21]#011train-error:0.1862#011validation-error:0.2057 [16:59:15] src/tree/updater_prune.cc:74: tree pruning end, 
1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [22]#011train-error:0.183#011validation-error:0.202 [16:59:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [23]#011train-error:0.1798#011validation-error:0.2003 [16:59:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [24]#011train-error:0.177667#011validation-error:0.1975 [16:59:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5 [25]#011train-error:0.176133#011validation-error:0.1959 [16:59:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [26]#011train-error:0.1746#011validation-error:0.1962 [16:59:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [27]#011train-error:0.171133#011validation-error:0.193 [16:59:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [28]#011train-error:0.168933#011validation-error:0.1906 [16:59:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [29]#011train-error:0.167733#011validation-error:0.1896 [16:59:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [30]#011train-error:0.1654#011validation-error:0.1863 [16:59:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [31]#011train-error:0.162#011validation-error:0.1856 [16:59:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [32]#011train-error:0.158067#011validation-error:0.1844 [16:59:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [33]#011train-error:0.156533#011validation-error:0.1842 [16:59:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [34]#011train-error:0.157067#011validation-error:0.1833 [16:59:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [35]#011train-error:0.156133#011validation-error:0.182 [16:59:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [36]#011train-error:0.156067#011validation-error:0.1815 [16:59:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [37]#011train-error:0.155133#011validation-error:0.1813 [16:59:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [38]#011train-error:0.1532#011validation-error:0.1806 [16:59:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [39]#011train-error:0.1516#011validation-error:0.1796
###Markdown Testing the model
Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. 
This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transform we first need to create a transformer object from our trained estimator object.
###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data, so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once, then we need to specify how the data file should be split up. Since each line is a single entry in our data set, we tell SageMaker that it can split the input on each line.
###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running, but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback, we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
.....................Arguments: serve [2020-05-21 17:05:10 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-05-21 17:05:10 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-05-21 17:05:10 +0000] [1] [INFO] Using worker: gevent [2020-05-21 17:05:10 +0000] [38] [INFO] Booting worker with pid: 38 [2020-05-21 17:05:10 +0000] [39] [INFO] Booting worker with pid: 39 [2020-05-21:17:05:10:INFO] Model loaded successfully for worker : 38 [2020-05-21 17:05:10 +0000] [40] [INFO] Booting worker with pid: 40 [2020-05-21:17:05:10:INFO] Model loaded successfully for worker : 39 [2020-05-21 17:05:10 +0000] [41] [INFO] Booting worker with pid: 41 [2020-05-21:17:05:10:INFO] Model loaded successfully for worker : 40 [2020-05-21:17:05:10:INFO] Model loaded successfully for worker : 41 2020-05-21T17:05:38.745:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-05-21:17:05:40:INFO] Sniff delimiter as ',' [2020-05-21:17:05:40:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:41:INFO] Sniff delimiter as ',' [2020-05-21:17:05:41:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:40:INFO] Sniff delimiter as ',' [2020-05-21:17:05:40:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:41:INFO] Sniff delimiter as ',' [2020-05-21:17:05:41:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:41:INFO] Sniff delimiter as ',' [2020-05-21:17:05:41:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:41:INFO] Sniff delimiter as ',' [2020-05-21:17:05:41:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:42:INFO] Sniff delimiter as ',' [2020-05-21:17:05:42:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:43:INFO] Sniff delimiter as ',' [2020-05-21:17:05:43:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:43:INFO] Sniff delimiter as ',' [2020-05-21:17:05:42:INFO] Sniff delimiter as ',' [2020-05-21:17:05:42:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:43:INFO] Sniff delimiter as ',' [2020-05-21:17:05:43:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:43:INFO] Sniff delimiter as ',' [2020-05-21:17:05:43:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:43:INFO] Determined delimiter of CSV input 
is ',' [2020-05-21:17:05:44:INFO] Sniff delimiter as ',' [2020-05-21:17:05:44:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:44:INFO] Sniff delimiter as ',' [2020-05-21:17:05:44:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:45:INFO] Sniff delimiter as ',' [2020-05-21:17:05:45:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:45:INFO] Sniff delimiter as ',' [2020-05-21:17:05:45:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:45:INFO] Sniff delimiter as ',' [2020-05-21:17:05:45:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:45:INFO] Sniff delimiter as ',' [2020-05-21:17:05:45:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:45:INFO] Sniff delimiter as ',' [2020-05-21:17:05:45:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:45:INFO] Sniff delimiter as ',' [2020-05-21:17:05:45:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:47:INFO] Sniff delimiter as ',' [2020-05-21:17:05:47:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:47:INFO] Sniff delimiter as ',' [2020-05-21:17:05:47:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:47:INFO] Sniff delimiter as ',' [2020-05-21:17:05:47:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:47:INFO] Sniff delimiter as ',' [2020-05-21:17:05:47:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:47:INFO] Sniff delimiter as ',' [2020-05-21:17:05:47:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:47:INFO] Sniff delimiter as ',' [2020-05-21:17:05:47:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:48:INFO] Sniff delimiter as ',' [2020-05-21:17:05:48:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:48:INFO] Sniff delimiter as ',' [2020-05-21:17:05:48:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:50:INFO] Sniff delimiter as ',' [2020-05-21:17:05:50:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:50:INFO] Sniff delimiter as ',' [2020-05-21:17:05:50:INFO] Sniff delimiter as ',' [2020-05-21:17:05:50:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:50:INFO] Sniff delimiter as ',' [2020-05-21:17:05:50:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:50:INFO] Sniff delimiter as ',' [2020-05-21:17:05:50:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:50:INFO] Sniff delimiter as ',' [2020-05-21:17:05:50:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:50:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:50:INFO] Sniff delimiter as ',' [2020-05-21:17:05:50:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:50:INFO] Sniff delimiter as ',' [2020-05-21:17:05:50:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:52:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:52:INFO] Sniff delimiter as ',' [2020-05-21:17:05:52:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:52:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:52:INFO] Sniff delimiter as ',' [2020-05-21:17:05:52:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:54:INFO] Sniff delimiter as ',' [2020-05-21:17:05:54:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:55:INFO] Sniff delimiter as ',' [2020-05-21:17:05:55:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:55:INFO] Sniff delimiter as ',' [2020-05-21:17:05:55:INFO] Determined delimiter of CSV 
input is ',' [2020-05-21:17:05:55:INFO] Sniff delimiter as ',' [2020-05-21:17:05:55:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:54:INFO] Sniff delimiter as ',' [2020-05-21:17:05:54:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:55:INFO] Sniff delimiter as ',' [2020-05-21:17:05:55:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:55:INFO] Sniff delimiter as ',' [2020-05-21:17:05:55:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:55:INFO] Sniff delimiter as ',' [2020-05-21:17:05:55:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:57:INFO] Sniff delimiter as ',' [2020-05-21:17:05:57:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:57:INFO] Sniff delimiter as ',' [2020-05-21:17:05:57:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:57:INFO] Sniff delimiter as ',' [2020-05-21:17:05:57:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:57:INFO] Sniff delimiter as ',' [2020-05-21:17:05:57:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:57:INFO] Sniff delimiter as ',' [2020-05-21:17:05:57:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:57:INFO] Sniff delimiter as ',' [2020-05-21:17:05:57:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:57:INFO] Sniff delimiter as ',' [2020-05-21:17:05:57:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:57:INFO] Sniff delimiter as ',' [2020-05-21:17:05:57:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:59:INFO] Sniff delimiter as ',' [2020-05-21:17:05:59:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:59:INFO] Sniff delimiter as ',' [2020-05-21:17:05:59:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:06:00:INFO] Sniff delimiter as ',' [2020-05-21:17:06:00:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:06:00:INFO] Sniff delimiter as ',' [2020-05-21:17:05:59:INFO] Sniff delimiter as ',' [2020-05-21:17:05:59:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:05:59:INFO] Sniff delimiter as ',' [2020-05-21:17:05:59:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:06:00:INFO] Sniff delimiter as ',' [2020-05-21:17:06:00:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:06:00:INFO] Sniff delimiter as ',' [2020-05-21:17:06:00:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:06:00:INFO] Determined delimiter of CSV input is ',' [34m[2020-05-21:17:06:02:INFO] Sniff delimiter as ',' [2020-05-21:17:06:02:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:06:02:INFO] Sniff delimiter as ',' [2020-05-21:17:06:02:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:06:02:INFO] Sniff delimiter as ',' [2020-05-21:17:06:02:INFO] Sniff delimiter as ',' [2020-05-21:17:06:02:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:06:02:INFO] Sniff delimiter as ',' [2020-05-21:17:06:02:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:06:02:INFO] Sniff delimiter as ',' [2020-05-21:17:06:02:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:06:02:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:06:02:INFO] Sniff delimiter as ',' [2020-05-21:17:06:02:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:06:02:INFO] Sniff delimiter as ',' [2020-05-21:17:06:02:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:06:04:INFO] Sniff delimiter as ',' [2020-05-21:17:06:04:INFO] Determined delimiter of CSV input is 
',' [2020-05-21:17:06:04:INFO] Sniff delimiter as ',' [2020-05-21:17:06:04:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:06:04:INFO] Sniff delimiter as ',' [2020-05-21:17:06:04:INFO] Determined delimiter of CSV input is ',' [2020-05-21:17:06:04:INFO] Sniff delimiter as ',' [2020-05-21:17:06:04:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/369.0 KiB (2.3 MiB/s) with 1 file(s) remaining Completed 369.0 KiB/369.0 KiB (3.4 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-west-1-731892558299/xgboost-2020-05-21-17-01-45-753/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() # Tokenized data looks like this: print(new_X[100]) ###Output ['felt', 'great', 'joy', 'see', 'film', 'master', 'piec', 'convinc', 'portugues', 'cinema', 'becam', 'realli', 'good', 'see', 'best', 'portugues', 'actor', 'field'] ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
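Since the reviews in `new_X` are already tokenized lists of words, both the `preprocessor` and the `tokenizer` hooks can simply be identity functions, which stops `CountVectorizer` from trying to lowercase or re-tokenize the input. As a rough illustration, the next cell is a minimal sketch using a tiny made-up toy vocabulary (the toy names are purely hypothetical); the behaviour with the real `5000`-word vocabulary is the same. ###Code
# Minimal sketch with a hypothetical toy vocabulary: with a fixed vocabulary and
# identity preprocessor/tokenizer, transform() simply counts how often each known
# word appears in each (already tokenized) document; out-of-vocabulary words are ignored.
from sklearn.feature_extraction.text import CountVectorizer

toy_vocabulary = {'great': 0, 'terribl': 1, 'movi': 2}
toy_vectorizer = CountVectorizer(vocabulary=toy_vocabulary,
                                 preprocessor=lambda x: x,
                                 tokenizer=lambda x: x)
toy_reviews = [['great', 'movi', 'movi'], ['terribl', 'movi']]
# Expected counts: [[1, 0, 2], [0, 1, 1]]
print(toy_vectorizer.transform(toy_reviews).toarray())
###Output
_____no_output_____
###Markdown
With that in mind, we now build the real vectorizer from the previously constructed vocabulary.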
###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) print(new_XV[100]) #Testing What Vocabulary Looks Like print(vocabulary) print(type(vocabulary)) cache_file_bow = "bow_features.pkl" # add path to this cache_data_bow = joblib.load(os.path.join(cache_dir, cache_file_bow)) vocabulary_loaded = cache_data_bow['vocabulary'] cache_data_bow = None print(vocabulary_loaded) print(type(vocabulary_loaded)) vocabulary_loaded = None ###Output <class 'dict'> ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. 
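# As with the test set earlier, the input is CSV, so content_type='text/csv' tells SageMaker
# how to serialize the requests, and split_type='Line' lets it split the file on line
# boundaries when the data is too large to send to the model in a single request.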
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ......................Arguments: serve [2020-05-21 18:04:17 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-05-21 18:04:17 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-05-21 18:04:17 +0000] [1] [INFO] Using worker: gevent [2020-05-21 18:04:17 +0000] [39] [INFO] Booting worker with pid: 39 [2020-05-21 18:04:17 +0000] [40] [INFO] Booting worker with pid: 40 [2020-05-21:18:04:17:INFO] Model loaded successfully for worker : 39 [2020-05-21 18:04:17 +0000] [41] [INFO] Booting worker with pid: 41 [2020-05-21:18:04:17:INFO] Model loaded successfully for worker : 40 [2020-05-21 18:04:17 +0000] [42] [INFO] Booting worker with pid: 42 [2020-05-21:18:04:17:INFO] Model loaded successfully for worker : 41 [2020-05-21:18:04:17:INFO] Model loaded successfully for worker : 42 [2020-05-21:18:04:44:INFO] Sniff delimiter as ',' [2020-05-21:18:04:44:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:44:INFO] Sniff delimiter as ',' [2020-05-21:18:04:44:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:44:INFO] Sniff delimiter as ',' [2020-05-21:18:04:44:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:44:INFO] Sniff delimiter as ',' [2020-05-21:18:04:44:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:44:INFO] Sniff delimiter as ',' [2020-05-21:18:04:44:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:44:INFO] Sniff delimiter as ',' [2020-05-21:18:04:44:INFO] Determined delimiter of CSV input is ',' 2020-05-21T18:04:41.981:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-05-21:18:04:45:INFO] Sniff delimiter as ',' [2020-05-21:18:04:45:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:45:INFO] Sniff delimiter as ',' [2020-05-21:18:04:45:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:47:INFO] Sniff delimiter as ',' [2020-05-21:18:04:47:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:47:INFO] Sniff delimiter as ',' [2020-05-21:18:04:47:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:47:INFO] Sniff delimiter as ',' [2020-05-21:18:04:47:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:47:INFO] Sniff delimiter as ',' [2020-05-21:18:04:47:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:47:INFO] Sniff delimiter as ',' [2020-05-21:18:04:47:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:47:INFO] Sniff delimiter as ',' [2020-05-21:18:04:47:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:47:INFO] Sniff delimiter as ',' [2020-05-21:18:04:47:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:47:INFO] Sniff delimiter as ',' [2020-05-21:18:04:47:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:51:INFO] Sniff delimiter as ',' [2020-05-21:18:04:51:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:51:INFO] Sniff delimiter as ',' [2020-05-21:18:04:51:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:52:INFO] Sniff delimiter as ',' [2020-05-21:18:04:52:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:52:INFO] Sniff delimiter as ',' [2020-05-21:18:04:52:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:52:INFO] Sniff delimiter as ',' [2020-05-21:18:04:52:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:52:INFO] Sniff delimiter as ',' 
[2020-05-21:18:04:52:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:52:INFO] Sniff delimiter as ',' [2020-05-21:18:04:52:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:52:INFO] Sniff delimiter as ',' [2020-05-21:18:04:52:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:54:INFO] Sniff delimiter as ',' [2020-05-21:18:04:54:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:54:INFO] Sniff delimiter as ',' [2020-05-21:18:04:54:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:54:INFO] Sniff delimiter as ',' [2020-05-21:18:04:54:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:54:INFO] Sniff delimiter as ',' [2020-05-21:18:04:54:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:54:INFO] Sniff delimiter as ',' [2020-05-21:18:04:54:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:54:INFO] Sniff delimiter as ',' [2020-05-21:18:04:54:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:54:INFO] Sniff delimiter as ',' [2020-05-21:18:04:54:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:54:INFO] Sniff delimiter as ',' [2020-05-21:18:04:54:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:56:INFO] Sniff delimiter as ',' [2020-05-21:18:04:56:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:56:INFO] Sniff delimiter as ',' [2020-05-21:18:04:56:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:56:INFO] Sniff delimiter as ',' [2020-05-21:18:04:56:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:56:INFO] Sniff delimiter as ',' [2020-05-21:18:04:56:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:57:INFO] Sniff delimiter as ',' [2020-05-21:18:04:57:INFO] Sniff delimiter as ',' [2020-05-21:18:04:57:INFO] Sniff delimiter as ',' [2020-05-21:18:04:57:INFO] Sniff delimiter as ',' [2020-05-21:18:04:57:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:57:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:57:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:57:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:59:INFO] Sniff delimiter as ',' [2020-05-21:18:04:59:INFO] Sniff delimiter as ',' [2020-05-21:18:04:59:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:59:INFO] Sniff delimiter as ',' [2020-05-21:18:04:59:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:59:INFO] Sniff delimiter as ',' [2020-05-21:18:04:59:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:59:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:59:INFO] Sniff delimiter as ',' [2020-05-21:18:04:59:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:04:59:INFO] Sniff delimiter as ',' [2020-05-21:18:04:59:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:01:INFO] Sniff delimiter as ',' [2020-05-21:18:05:01:INFO] Sniff delimiter as ',' [2020-05-21:18:05:01:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:01:INFO] Sniff delimiter as ',' [2020-05-21:18:05:01:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:01:INFO] Sniff delimiter as ',' [2020-05-21:18:05:01:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:01:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:01:INFO] Sniff delimiter as ',' [2020-05-21:18:05:01:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:01:INFO] Sniff delimiter as ',' 
[2020-05-21:18:05:01:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:01:INFO] Sniff delimiter as ',' [2020-05-21:18:05:01:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:01:INFO] Sniff delimiter as ',' [2020-05-21:18:05:01:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:04:INFO] Sniff delimiter as ',' [2020-05-21:18:05:04:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:04:INFO] Sniff delimiter as ',' [2020-05-21:18:05:04:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:04:INFO] Sniff delimiter as ',' [2020-05-21:18:05:04:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:04:INFO] Sniff delimiter as ',' [2020-05-21:18:05:04:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:04:INFO] Sniff delimiter as ',' [2020-05-21:18:05:04:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:04:INFO] Sniff delimiter as ',' [2020-05-21:18:05:04:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:04:INFO] Sniff delimiter as ',' [2020-05-21:18:05:04:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:04:INFO] Sniff delimiter as ',' [2020-05-21:18:05:04:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:06:INFO] Sniff delimiter as ',' [2020-05-21:18:05:06:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:06:INFO] Sniff delimiter as ',' [2020-05-21:18:05:06:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:06:INFO] Sniff delimiter as ',' [2020-05-21:18:05:06:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:06:INFO] Sniff delimiter as ',' [2020-05-21:18:05:06:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:06:INFO] Sniff delimiter as ',' [2020-05-21:18:05:06:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:06:INFO] Sniff delimiter as ',' [2020-05-21:18:05:06:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:06:INFO] Sniff delimiter as ',' [2020-05-21:18:05:06:INFO] Determined delimiter of CSV input is ',' [2020-05-21:18:05:06:INFO] Sniff delimiter as ',' [2020-05-21:18:05:06:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/369.1 KiB (1.8 MiB/s) with 1 file(s) remaining Completed 369.1 KiB/369.1 KiB (2.6 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-west-1-731892558299/xgboost-2020-05-21-18-01-02-739/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user-provided review. In a real-life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set.
Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = None xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output WARNING:sagemaker:Using already existing model: xgboost-2020-05-21-16-56-01-110 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample( new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. 
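If we wanted a handful of misclassified examples at once, we could also wrap a fresh generator with `itertools.islice`; the optional sketch below does exactly that (note that it still calls the deployed endpoint once per review, so it is not free). ###Code
from itertools import islice

# Optional sketch: pull the first three misclassified reviews in one go, using a
# fresh generator so that gn is left untouched. Each item is a
# (tokenized_review, true_label) pair yielded by get_sample.
for words, label in islice(get_sample(new_X, new_XV, new_Y), 3):
    print(label, words[:10])
###Output
_____no_output_____
###Markdown
For now, though, we simply pull one sample at a time.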
###Code print(next(gn)) ###Output (['polit', 'documentari', 'recent', 'vintag', 'call', 'fight', 'tri', 'examin', 'infam', 'militari', 'industri', 'complex', 'grip', 'nation', 'consid', 'polem', 'incis', 'make', 'case', 'complex', 'war', 'fiasco', 'current', 'involv', 'iraq', 'yet', 'far', 'famou', 'seri', 'film', 'name', 'made', 'world', 'war', 'two', 'hollywood', 'director', 'frank', 'capra', 'although', 'consid', 'documentari', 'oscar', 'categori', 'seri', 'seven', 'film', 'realli', 'truli', 'mere', 'agitprop', 'vein', 'leni', 'reifenst', 'triumph', 'scene', 'capra', 'recycl', 'purpos', 'said', 'fact', 'mean', 'vital', 'inform', 'subsequ', 'gener', 'world', 'war', 'two', 'documentari', 'bbc', 'laud', 'world', 'war', 'lack', 'mean', 'valu', 'primari', 'sourc', 'less', 'valuabl', 'skill', 'made', 'recent', 'purchas', 'use', 'dvd', 'discount', 'store', 'found', 'opportun', 'select', 'free', 'dvd', 'purchas', 'chose', 'goodtim', 'dvd', 'four', 'dvd', 'collect', 'seri', 'rare', 'someth', 'free', 'worth', 'invalu', 'extra', 'dvd', 'sound', 'qualiti', 'print', 'vari', 'film', 'provid', 'insight', 'mind', 'american', 'two', 'third', 'centuri', 'ago', 'racism', 'overt', 'mani', 'classic', 'warner', 'brother', 'pro', 'war', 'cartoon', 'era', 'noth', 'wrong', 'blatant', 'distort', 'fact', 'seven', 'film', 'produc', '1942', '1945', 'prelud', 'war', 'nazi', 'strike', 'divid', 'conquer', 'battl', 'britain', 'battl', 'russia', 'battl', 'china', 'war', 'come', 'america', 'overal', 'film', 'seri', 'well', 'worth', 'watch', 'obviou', 'reason', 'subtl', 'thing', 'reveal', 'use', 'plural', 'term', 'like', 'x', 'million', 'refer', 'dollar', 'rather', 'modern', 'singular', 'overus', 'graphic', 'whole', 'seri', 'japanes', 'sword', 'pierc', 'center', 'manchuria', 'yet', 'also', 'show', 'complex', 'tri', 'appli', 'past', 'standard', 'current', 'war', 'lesson', 'world', 'war', 'one', 'avoid', 'foreign', 'entangl', 'applic', 'world', 'war', 'two', 'whose', 'lesson', 'act', 'earli', 'dictatorship', 'applic', 'three', 'major', 'war', 'america', 'fought', 'sinc', 'korea', 'vietnam', 'iraq', 'fact', 'much', 'seri', 'teeter', 'uncertainti', 'time', 'made', 'underscor', 'histor', 'valu', 'today', 'inform', 'clog', 'time', 'may', 'help', 'sort', 'truth', 'lie', 'propaganda', 'today', 'least', 'realiz', 'first', 'tenuou', 'posit', 'last'], 1) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews. To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary.
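Set membership alone does not say how important the differing words are, so, as an optional extra, we could also count how often words occur in the new reviews using the vectorizer we just fitted; the sketch below is restricted to the new corpus, since the original training documents are no longer held in memory at this point. ###Code
import numpy as np

# Optional sketch: total occurrence count of each word in the new reviews,
# taken from the CountVectorizer that was fitted on new_X.
new_word_counts = np.asarray(new_vectorizer.transform(new_X).sum(axis=0)).flatten()

# How often do the words that are unique to the new vocabulary actually appear?
for word in sorted(new_vocabulary - original_vocabulary):
    print(word, int(new_word_counts[new_vectorizer.vocabulary_[word]]))
###Output
_____no_output_____
###Markdown
First, though, the set differences themselves.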
###Code print(original_vocabulary - new_vocabulary) ###Output {'21st', 'victorian', 'reincarn', 'spill', 'weari', 'ghetto', 'playboy'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'banana', 'sophi', 'orchestr', 'dubiou', 'omin', 'optimist', 'masterson'} ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency. **Question:** What exactly is going on here? Not only which (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account? **NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data. ###Code # Potential changes: 1) the words themselves are different, 2) word frequencies have shifted, i.e. certain words will have more impact than others because of their frequency. # The changed frequencies affect the impact of the corresponding input features/nodes in the model ###Output _____no_output_____ ###Markdown (TODO) Build a new model Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed. To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model. **NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results. In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10,000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words.
Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3. **TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = None new_val_location = None new_train_location = None new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set. **TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = None new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data. **TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = None s3_new_input_validation = None s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model.
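# The channel names passed to fit() matter here: SageMaker's built-in XGBoost container
# expects channels named 'train' and 'validation', and the validation channel is what
# early_stopping_rounds is evaluated against (see the validation-error lines in the log below).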
new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2020-05-21 23:01:33 Starting - Starting the training job... 2020-05-21 23:01:34 Starting - Launching requested ML instances...... 2020-05-21 23:02:41 Starting - Preparing the instances for training... 2020-05-21 23:03:28 Downloading - Downloading input data... 2020-05-21 23:04:01 Training - Downloading the training image... 2020-05-21 23:04:20 Training - Training image download completed. Training in progress.Arguments: train [2020-05-21:23:04:20:INFO] Running standalone xgboost training. [2020-05-21:23:04:21:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8471.45mb [2020-05-21:23:04:21:INFO] Determined delimiter of CSV input is ',' [23:04:20] S3DistributionType set as FullyReplicated [23:04:22] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-05-21:23:04:23:INFO] Determined delimiter of CSV input is ',' [23:04:23] S3DistributionType set as FullyReplicated [23:04:24] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [23:04:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 2 pruned nodes, max_depth=5 [0]#011train-error:0.312267#011validation-error:0.318 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [23:04:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [1]#011train-error:0.294333#011validation-error:0.3008 [23:04:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [2]#011train-error:0.276867#011validation-error:0.2857 [23:04:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 8 pruned nodes, max_depth=5 [3]#011train-error:0.265733#011validation-error:0.2792 [23:04:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [4]#011train-error:0.264067#011validation-error:0.2767 [23:04:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [5]#011train-error:0.2588#011validation-error:0.2697 [23:04:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 2 pruned nodes, max_depth=5 [6]#011train-error:0.2522#011validation-error:0.263 [23:04:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [7]#011train-error:0.251667#011validation-error:0.2626 [23:04:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [8]#011train-error:0.244733#011validation-error:0.2572 [23:04:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [9]#011train-error:0.2414#011validation-error:0.2549 [23:04:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [10]#011train-error:0.2344#011validation-error:0.248 [23:04:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [11]#011train-error:0.2272#011validation-error:0.2448 [23:04:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 2 pruned nodes, max_depth=5 [12]#011train-error:0.225933#011validation-error:0.243 [23:04:44] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [13]#011train-error:0.223867#011validation-error:0.2402 [23:04:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [14]#011train-error:0.217733#011validation-error:0.2374 [23:04:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [15]#011train-error:0.216267#011validation-error:0.2329 [23:04:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [16]#011train-error:0.212867#011validation-error:0.2301 [23:04:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [17]#011train-error:0.207333#011validation-error:0.2259 [23:04:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [18]#011train-error:0.204133#011validation-error:0.2234 [23:04:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [19]#011train-error:0.2014#011validation-error:0.2208 [23:04:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [20]#011train-error:0.199133#011validation-error:0.2208 [23:04:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [21]#011train-error:0.198267#011validation-error:0.2197 [23:04:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [22]#011train-error:0.1972#011validation-error:0.2174 [23:04:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [23]#011train-error:0.193333#011validation-error:0.2162 [23:04:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 4 pruned nodes, max_depth=5 [24]#011train-error:0.190333#011validation-error:0.2135 [23:05:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [25]#011train-error:0.1872#011validation-error:0.2135 [23:05:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [26]#011train-error:0.183867#011validation-error:0.2112 [23:05:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 14 pruned nodes, max_depth=5 [27]#011train-error:0.181333#011validation-error:0.2096 [23:05:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [28]#011train-error:0.1796#011validation-error:0.2081 [23:05:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 8 pruned nodes, max_depth=5 [29]#011train-error:0.175#011validation-error:0.2057 [23:05:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [30]#011train-error:0.1754#011validation-error:0.2054 [23:05:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [31]#011train-error:0.172667#011validation-error:0.2039 [23:05:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [32]#011train-error:0.1714#011validation-error:0.2022 [23:05:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [33]#011train-error:0.171067#011validation-error:0.2006 [23:05:11] src/tree/updater_prune.cc:74: 
tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [34]#011train-error:0.1706#011validation-error:0.2009 [23:05:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5 [35]#011train-error:0.1704#011validation-error:0.2001 [23:05:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [36]#011train-error:0.168467#011validation-error:0.1986 [23:05:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 2 pruned nodes, max_depth=5 [37]#011train-error:0.1666#011validation-error:0.1981 [23:05:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [38]#011train-error:0.1668#011validation-error:0.1978 [23:05:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5 [39]#011train-error:0.164133#011validation-error:0.1968 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? To prevent this, you should have created a test set from the new data you had. Alternatively, you could get additional data and shuffle it in with old data. First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = None new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. 
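# Keep in mind that this transforms the same data the model was just trained on, so the
# resulting accuracy is an optimistic baseline rather than a fair test (see the leakage
# note above); a held-out split of the new reviews would give a fairer estimate.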
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ...................Arguments: serve [2020-05-21 23:13:41 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-05-21 23:13:41 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-05-21 23:13:41 +0000] [1] [INFO] Using worker: gevent [2020-05-21 23:13:41 +0000] [38] [INFO] Booting worker with pid: 38 [2020-05-21 23:13:41 +0000] [39] [INFO] Booting worker with pid: 39 [2020-05-21 23:13:41 +0000] [40] [INFO] Booting worker with pid: 40 [2020-05-21:23:13:41:INFO] Model loaded successfully for worker : 39 [2020-05-21:23:13:41:INFO] Model loaded successfully for worker : 38 [2020-05-21 23:13:41 +0000] [41] [INFO] Booting worker with pid: 41 [2020-05-21:23:13:41:INFO] Model loaded successfully for worker : 40 [2020-05-21:23:13:41:INFO] Model loaded successfully for worker : 41 2020-05-21T23:14:02.336:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-05-21:23:14:05:INFO] Sniff delimiter as ',' [2020-05-21:23:14:05:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:05:INFO] Sniff delimiter as ',' [2020-05-21:23:14:05:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:05:INFO] Sniff delimiter as ',' [2020-05-21:23:14:05:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:05:INFO] Sniff delimiter as ',' [2020-05-21:23:14:05:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:05:INFO] Sniff delimiter as ',' [2020-05-21:23:14:05:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:05:INFO] Sniff delimiter as ',' [2020-05-21:23:14:05:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:05:INFO] Sniff delimiter as ',' [2020-05-21:23:14:05:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:05:INFO] Sniff delimiter as ',' [2020-05-21:23:14:05:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:07:INFO] Sniff delimiter as ',' [2020-05-21:23:14:07:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:07:INFO] Sniff delimiter as ',' [2020-05-21:23:14:07:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:07:INFO] Sniff delimiter as ',' [2020-05-21:23:14:07:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:07:INFO] Sniff delimiter as ',' [2020-05-21:23:14:07:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:08:INFO] Sniff delimiter as ',' [2020-05-21:23:14:08:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:08:INFO] Sniff delimiter as ',' [2020-05-21:23:14:08:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:08:INFO] Sniff delimiter as ',' [2020-05-21:23:14:08:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:08:INFO] Sniff delimiter as ',' [2020-05-21:23:14:08:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:10:INFO] Sniff delimiter as ',' [2020-05-21:23:14:10:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:10:INFO] Sniff delimiter as ',' [2020-05-21:23:14:10:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:10:INFO] Sniff delimiter as ',' [2020-05-21:23:14:10:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:10:INFO] Sniff delimiter as ',' [2020-05-21:23:14:10:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:10:INFO] Sniff delimiter as ',' [2020-05-21:23:14:10:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:10:INFO] Sniff delimiter as 
',' [2020-05-21:23:14:10:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:10:INFO] Sniff delimiter as ',' [2020-05-21:23:14:10:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:10:INFO] Sniff delimiter as ',' [2020-05-21:23:14:10:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:12:INFO] Sniff delimiter as ',' [2020-05-21:23:14:12:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:12:INFO] Sniff delimiter as ',' [2020-05-21:23:14:12:INFO] Sniff delimiter as ',' [2020-05-21:23:14:12:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:12:INFO] Sniff delimiter as ',' [2020-05-21:23:14:12:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:12:INFO] Sniff delimiter as ',' [2020-05-21:23:14:12:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:12:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:12:INFO] Sniff delimiter as ',' [2020-05-21:23:14:12:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:13:INFO] Sniff delimiter as ',' [2020-05-21:23:14:13:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:13:INFO] Sniff delimiter as ',' [2020-05-21:23:14:13:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:15:INFO] Sniff delimiter as ',' [2020-05-21:23:14:15:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:15:INFO] Sniff delimiter as ',' [2020-05-21:23:14:15:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:15:INFO] Sniff delimiter as ',' [2020-05-21:23:14:15:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:15:INFO] Sniff delimiter as ',' [2020-05-21:23:14:15:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:15:INFO] Sniff delimiter as ',' [2020-05-21:23:14:15:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:15:INFO] Sniff delimiter as ',' [2020-05-21:23:14:15:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:17:INFO] Sniff delimiter as ',' [2020-05-21:23:14:17:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:17:INFO] Sniff delimiter as ',' [2020-05-21:23:14:17:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:17:INFO] Sniff delimiter as ',' [2020-05-21:23:14:17:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:17:INFO] Sniff delimiter as ',' [2020-05-21:23:14:17:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:17:INFO] Sniff delimiter as ',' [2020-05-21:23:14:17:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:17:INFO] Sniff delimiter as ',' [2020-05-21:23:14:17:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:17:INFO] Sniff delimiter as ',' [2020-05-21:23:14:17:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:17:INFO] Sniff delimiter as ',' [2020-05-21:23:14:17:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:19:INFO] Sniff delimiter as ',' [2020-05-21:23:14:19:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:19:INFO] Sniff delimiter as ',' [2020-05-21:23:14:19:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:20:INFO] Sniff delimiter as ',' [2020-05-21:23:14:20:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:20:INFO] Sniff delimiter as ',' [2020-05-21:23:14:20:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:20:INFO] Sniff delimiter as ',' [2020-05-21:23:14:20:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:20:INFO] Sniff delimiter as ',' 
[2020-05-21:23:14:20:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:20:INFO] Sniff delimiter as ',' [2020-05-21:23:14:20:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:20:INFO] Sniff delimiter as ',' [2020-05-21:23:14:20:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:22:INFO] Sniff delimiter as ',' [2020-05-21:23:14:22:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:22:INFO] Sniff delimiter as ',' [2020-05-21:23:14:22:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:22:INFO] Sniff delimiter as ',' [2020-05-21:23:14:22:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:22:INFO] Sniff delimiter as ',' [2020-05-21:23:14:22:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:22:INFO] Sniff delimiter as ',' [2020-05-21:23:14:22:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:22:INFO] Sniff delimiter as ',' [2020-05-21:23:14:22:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:22:INFO] Sniff delimiter as ',' [2020-05-21:23:14:22:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:22:INFO] Sniff delimiter as ',' [2020-05-21:23:14:22:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:24:INFO] Sniff delimiter as ',' [2020-05-21:23:14:24:INFO] Sniff delimiter as ',' [2020-05-21:23:14:24:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:24:INFO] Sniff delimiter as ',' [2020-05-21:23:14:24:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:24:INFO] Determined delimiter of CSV input is ',' [2020-05-21:23:14:24:INFO] Sniff delimiter as ',' [2020-05-21:23:14:24:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/366.3 KiB (2.6 MiB/s) with 1 file(s) remaining Completed 366.3 KiB/366.3 KiB (3.7 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-west-1-731892558299/xgboost-2020-05-21-23-10-37-242/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. 
###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. #test_X = None test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = None new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. 
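# ProductionVariants is a list, so a single endpoint can serve several models at once,
# with InitialVariantWeight controlling how traffic is split between the variants; here we
# only need one variant, backed by the model that new_xgb_transformer created.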
new_xgb_endpoint_config_info = None new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration. Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used. **TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background, so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output ---------! ###Markdown Step 7: Delete the Endpoint
Of course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional Questions
This notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself. For example,
- What other ways could the underlying distribution change?
- Is it a good idea to re-train the model using only the new data?
- What would change if the quantity of new data wasn't large? Say you only received 500 samples?
Optional: Clean up
The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. 
###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-08-29 20:55:31-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 
200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 9.53MB/s in 12s 2020-08-29 20:55:44 (6.57 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Wrote preprocessed data to cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
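# The held-out validation split created below is what XGBoost monitors while fitting:
# the estimator is configured later with early_stopping_rounds=10, so training stops
# once the validation error has not improved for ten consecutive rounds.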
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample. For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3
Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later. For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option. Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded. For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost model
Now that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.
- Model Artifacts
- Training Code (Container)
- Inference Code (Container)
The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training. The other two objects, the training code and the inference code, are then used to manipulate the training artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data. The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Fit the XGBoost model
Now that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2020-08-29 21:25:18 Starting - Starting the training job... 2020-08-29 21:25:20 Starting - Launching requested ML instances...... 2020-08-29 21:26:43 Starting - Preparing the instances for training...... 2020-08-29 21:27:34 Downloading - Downloading input data... 2020-08-29 21:28:13 Training - Training image download completed. Training in progress..Arguments: train [2020-08-29:21:28:14:INFO] Running standalone xgboost training. [2020-08-29:21:28:14:INFO] File size need to be processed in the node: 238.47mb. 
Available memory size in the node: 8488.98mb [2020-08-29:21:28:14:INFO] Determined delimiter of CSV input is ',' [21:28:14] S3DistributionType set as FullyReplicated [21:28:16] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-08-29:21:28:16:INFO] Determined delimiter of CSV input is ',' [21:28:16] S3DistributionType set as FullyReplicated [21:28:17] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [21:28:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5 [0]#011train-error:0.299067#011validation-error:0.2998 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [21:28:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [1]#011train-error:0.280333#011validation-error:0.2838 [21:28:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5 [2]#011train-error:0.268133#011validation-error:0.2726 [21:28:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [3]#011train-error:0.271333#011validation-error:0.2748 [21:28:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 6 pruned nodes, max_depth=5 [4]#011train-error:0.2606#011validation-error:0.2648 [21:28:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.250933#011validation-error:0.2572 [21:28:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [6]#011train-error:0.2394#011validation-error:0.2478 [21:28:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [7]#011train-error:0.2364#011validation-error:0.2416 [21:28:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [8]#011train-error:0.231467#011validation-error:0.2392 [21:28:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [9]#011train-error:0.227067#011validation-error:0.2332 [21:28:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [10]#011train-error:0.219733#011validation-error:0.2277 [21:28:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [11]#011train-error:0.2148#011validation-error:0.224 [21:28:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [12]#011train-error:0.212933#011validation-error:0.2239 [21:28:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [13]#011train-error:0.209067#011validation-error:0.218 [21:28:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [14]#011train-error:0.2044#011validation-error:0.2152 [21:28:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [15]#011train-error:0.1996#011validation-error:0.2125 [21:28:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 16 pruned nodes, max_depth=5 [16]#011train-error:0.198533#011validation-error:0.2133 [21:28:42] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [17]#011train-error:0.197133#011validation-error:0.2107 [21:28:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [18]#011train-error:0.195933#011validation-error:0.2094 [21:28:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [19]#011train-error:0.190533#011validation-error:0.2045 [21:28:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5 [20]#011train-error:0.1858#011validation-error:0.2034 [21:28:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [21]#011train-error:0.1834#011validation-error:0.2016 [21:28:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [22]#011train-error:0.179133#011validation-error:0.1992 [21:28:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [23]#011train-error:0.177333#011validation-error:0.1967 [21:28:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [24]#011train-error:0.1738#011validation-error:0.1943 [21:28:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [25]#011train-error:0.1712#011validation-error:0.1928 [21:28:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [26]#011train-error:0.170933#011validation-error:0.1917 [21:28:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [27]#011train-error:0.168933#011validation-error:0.1912 [21:28:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [28]#011train-error:0.166733#011validation-error:0.1892 [21:28:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5 [29]#011train-error:0.165667#011validation-error:0.1879 [21:28:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [30]#011train-error:0.164733#011validation-error:0.1859 [21:29:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 14 pruned nodes, max_depth=5 [31]#011train-error:0.162667#011validation-error:0.1834 [21:29:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [32]#011train-error:0.161533#011validation-error:0.1812 [21:29:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 16 pruned nodes, max_depth=5 [33]#011train-error:0.1594#011validation-error:0.1811 [21:29:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [34]#011train-error:0.1586#011validation-error:0.1806 [21:29:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [35]#011train-error:0.157933#011validation-error:0.1789 [21:29:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [36]#011train-error:0.156333#011validation-error:0.1774 [21:29:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [37]#011train-error:0.155#011validation-error:0.1767 [21:29:09] src/tree/updater_prune.cc:74: 
tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [38]#011train-error:0.1542#011validation-error:0.1752 [21:29:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [39]#011train-error:0.152733#011validation-error:0.1743 [21:29:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 16 pruned nodes, max_depth=5 [40]#011train-error:0.151333#011validation-error:0.173 [21:29:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [41]#011train-error:0.1486#011validation-error:0.1724 [21:29:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [42]#011train-error:0.1482#011validation-error:0.1718 [21:29:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [43]#011train-error:0.1466#011validation-error:0.1709 [21:29:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [44]#011train-error:0.143533#011validation-error:0.1699 [21:29:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5 [45]#011train-error:0.142933#011validation-error:0.1693 [21:29:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [46]#011train-error:0.141867#011validation-error:0.1693 [21:29:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [47]#011train-error:0.139933#011validation-error:0.1692 [21:29:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [48]#011train-error:0.139733#011validation-error:0.1685 [21:29:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [49]#011train-error:0.1382#011validation-error:0.1671 [21:29:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [50]#011train-error:0.1376#011validation-error:0.1676 [21:29:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [51]#011train-error:0.1362#011validation-error:0.1675 [21:29:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 18 pruned nodes, max_depth=5 [52]#011train-error:0.135467#011validation-error:0.1676 [21:29:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [53]#011train-error:0.1348#011validation-error:0.1661 [21:29:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [54]#011train-error:0.1334#011validation-error:0.1642 [21:29:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [55]#011train-error:0.132333#011validation-error:0.1631 [21:29:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [56]#011train-error:0.131867#011validation-error:0.1655 [21:29:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 14 pruned nodes, max_depth=5 [57]#011train-error:0.130733#011validation-error:0.1649 [21:29:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 4 pruned nodes, max_depth=5 [58]#011train-error:0.130467#011validation-error:0.1645 [21:29:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra 
nodes, 12 pruned nodes, max_depth=5 [59]#011train-error:0.129067#011validation-error:0.1634 [21:29:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5 [60]#011train-error:0.129133#011validation-error:0.1634 [21:29:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [61]#011train-error:0.128867#011validation-error:0.1627 [21:29:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [62]#011train-error:0.127267#011validation-error:0.1622 [21:29:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5 [63]#011train-error:0.126533#011validation-error:0.1612 [21:29:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5 [64]#011train-error:0.125533#011validation-error:0.1599 [21:29:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [65]#011train-error:0.1256#011validation-error:0.1605 [21:29:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [66]#011train-error:0.1246#011validation-error:0.1588 [21:29:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [67]#011train-error:0.124667#011validation-error:0.1589 [21:29:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [68]#011train-error:0.123533#011validation-error:0.159 [21:29:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [69]#011train-error:0.1228#011validation-error:0.1588 [21:29:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [70]#011train-error:0.1228#011validation-error:0.1584 [21:29:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [71]#011train-error:0.122333#011validation-error:0.1581 [21:29:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [72]#011train-error:0.1218#011validation-error:0.1584 [21:29:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5 [73]#011train-error:0.121067#011validation-error:0.1569 [21:29:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [74]#011train-error:0.119933#011validation-error:0.157 [21:29:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [75]#011train-error:0.119667#011validation-error:0.1559 [21:29:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [76]#011train-error:0.117#011validation-error:0.1564 [21:29:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [77]#011train-error:0.1178#011validation-error:0.1555 [21:30:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 [78]#011train-error:0.117333#011validation-error:0.1562 [21:30:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [79]#011train-error:0.115867#011validation-error:0.1559 [21:30:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 
[80]#011train-error:0.115#011validation-error:0.1557 [21:30:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 18 pruned nodes, max_depth=5 [81]#011train-error:0.113867#011validation-error:0.1543 [21:30:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [82]#011train-error:0.1122#011validation-error:0.1541 [21:30:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [83]#011train-error:0.111933#011validation-error:0.1532 [21:30:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 18 pruned nodes, max_depth=5 [84]#011train-error:0.1124#011validation-error:0.1534 [21:30:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [85]#011train-error:0.111133#011validation-error:0.1528 [21:30:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [86]#011train-error:0.110867#011validation-error:0.1513 [21:30:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [87]#011train-error:0.110533#011validation-error:0.1522 [21:30:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5 [88]#011train-error:0.110133#011validation-error:0.1517 [21:30:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 18 pruned nodes, max_depth=5 [89]#011train-error:0.1098#011validation-error:0.1512 [21:30:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 20 pruned nodes, max_depth=5 [90]#011train-error:0.109133#011validation-error:0.1518 [21:30:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [91]#011train-error:0.108933#011validation-error:0.1514 [21:30:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5 [92]#011train-error:0.109667#011validation-error:0.1514 [21:30:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [93]#011train-error:0.1082#011validation-error:0.1516 [21:30:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [94]#011train-error:0.108067#011validation-error:0.1513 [21:30:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [95]#011train-error:0.1072#011validation-error:0.1507 [21:30:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5 [96]#011train-error:0.107067#011validation-error:0.1515 [21:30:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5 [97]#011train-error:0.107067#011validation-error:0.1508 [21:30:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 12 pruned nodes, max_depth=5 [98]#011train-error:0.107#011validation-error:0.1505 [21:30:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 [99]#011train-error:0.106333#011validation-error:0.1501 [21:30:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [100]#011train-error:0.105067#011validation-error:0.1507 [21:30:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 
[101]#011train-error:0.104667#011validation-error:0.1503 [21:30:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5 [102]#011train-error:0.104133#011validation-error:0.1497 [21:30:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 10 pruned nodes, max_depth=5 [103]#011train-error:0.104067#011validation-error:0.1493 [21:30:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [104]#011train-error:0.102867#011validation-error:0.1492 [21:30:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [105]#011train-error:0.102267#011validation-error:0.1487 [21:30:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 14 pruned nodes, max_depth=5 [106]#011train-error:0.101867#011validation-error:0.1484 [21:30:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 16 pruned nodes, max_depth=5 [107]#011train-error:0.101467#011validation-error:0.1479 [21:30:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [108]#011train-error:0.100333#011validation-error:0.1477 [21:30:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [109]#011train-error:0.1#011validation-error:0.147 [21:30:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 16 pruned nodes, max_depth=5 [110]#011train-error:0.100067#011validation-error:0.1469 [21:30:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [111]#011train-error:0.0994#011validation-error:0.1468 [21:30:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [112]#011train-error:0.099133#011validation-error:0.1472 [21:30:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 18 pruned nodes, max_depth=5 [113]#011train-error:0.099733#011validation-error:0.1471 [21:30:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5 [114]#011train-error:0.098867#011validation-error:0.1469 [21:30:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5 [115]#011train-error:0.099133#011validation-error:0.1466 [21:30:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [116]#011train-error:0.098#011validation-error:0.147 [21:30:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5 [117]#011train-error:0.097133#011validation-error:0.1462 [21:30:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 6 pruned nodes, max_depth=5 [118]#011train-error:0.097333#011validation-error:0.1473 [21:30:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [119]#011train-error:0.097#011validation-error:0.1465 [21:30:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [120]#011train-error:0.0962#011validation-error:0.1465 [21:30:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [121]#011train-error:0.094933#011validation-error:0.1461 [21:30:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 
[122]#011train-error:0.095#011validation-error:0.1469 [21:30:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5 [123]#011train-error:0.095#011validation-error:0.1472 [21:30:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5 [124]#011train-error:0.094667#011validation-error:0.1471 [21:31:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5 [125]#011train-error:0.094333#011validation-error:0.1464 [21:31:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5 [126]#011train-error:0.094067#011validation-error:0.1469 [21:31:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5 [127]#011train-error:0.093733#011validation-error:0.1459 [21:31:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [128]#011train-error:0.093133#011validation-error:0.1458 [21:31:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5 [129]#011train-error:0.092667#011validation-error:0.145 [21:31:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5 [130]#011train-error:0.092733#011validation-error:0.146 [21:31:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5 [131]#011train-error:0.092133#011validation-error:0.1454 [21:31:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5 [132]#011train-error:0.092#011validation-error:0.1449 [21:31:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [133]#011train-error:0.091533#011validation-error:0.1442 [21:31:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [134]#011train-error:0.091133#011validation-error:0.1437 [21:31:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 0 pruned nodes, max_depth=5 [135]#011train-error:0.090733#011validation-error:0.1434 [21:31:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5 [136]#011train-error:0.089933#011validation-error:0.1425 [21:31:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 8 pruned nodes, max_depth=5 [137]#011train-error:0.090067#011validation-error:0.1428 [21:31:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5 [138]#011train-error:0.089933#011validation-error:0.1426 [21:31:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 22 pruned nodes, max_depth=5 [139]#011train-error:0.089933#011validation-error:0.1421 [21:31:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [140]#011train-error:0.089467#011validation-error:0.1429 [21:31:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [141]#011train-error:0.0892#011validation-error:0.1422 [21:31:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5 [142]#011train-error:0.089#011validation-error:0.1428 [21:31:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5 
[143]#011train-error:0.088467#011validation-error:0.1419 [21:31:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 16 pruned nodes, max_depth=5 [144]#011train-error:0.088467#011validation-error:0.1419 [21:31:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [145]#011train-error:0.088533#011validation-error:0.1422 [21:31:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 6 pruned nodes, max_depth=5 [146]#011train-error:0.088133#011validation-error:0.1421 [21:31:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 14 pruned nodes, max_depth=5 [147]#011train-error:0.087533#011validation-error:0.1424 [21:31:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5 [148]#011train-error:0.087333#011validation-error:0.1425 [21:31:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [149]#011train-error:0.087067#011validation-error:0.1422 [21:31:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 6 pruned nodes, max_depth=5 [150]#011train-error:0.0866#011validation-error:0.1421 [21:31:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5 [151]#011train-error:0.087067#011validation-error:0.1425 [21:31:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5 [152]#011train-error:0.086#011validation-error:0.1428 [21:31:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 14 pruned nodes, max_depth=5 [153]#011train-error:0.0862#011validation-error:0.1421 Stopping. Best iteration: [143]#011train-error:0.088467#011validation-error:0.1419 2020-08-29 21:31:43 Uploading - Uploading generated training model 2020-08-29 21:31:43 Completed - Training job completed Training seconds: 249 Billable seconds: 249 ###Markdown Testing the model
Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. 
###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output .........................Arguments: serve Arguments: serve [2020-08-29 21:36:01 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-08-29 21:36:01 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-08-29 21:36:01 +0000] [1] [INFO] Using worker: gevent [2020-08-29 21:36:01 +0000] [36] [INFO] Booting worker with pid: 36 [2020-08-29 21:36:01 +0000] [37] [INFO] Booting worker with pid: 37 [2020-08-29 21:36:01 +0000] [38] [INFO] Booting worker with pid: 38 [2020-08-29 21:36:01 +0000] [39] [INFO] Booting worker with pid: 39 [2020-08-29:21:36:01:INFO] Model loaded successfully for worker : 38 [2020-08-29:21:36:01:INFO] Model loaded successfully for worker : 36 [2020-08-29 21:36:01 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-08-29 21:36:01 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-08-29 21:36:01 +0000] [1] [INFO] Using worker: gevent [2020-08-29 21:36:01 +0000] [36] [INFO] Booting worker with pid: 36 [2020-08-29 21:36:01 +0000] [37] [INFO] Booting worker with pid: 37 [2020-08-29 21:36:01 +0000] [38] [INFO] Booting worker with pid: 38 [2020-08-29 21:36:01 +0000] [39] [INFO] Booting worker with pid: 39 [2020-08-29:21:36:01:INFO] Model loaded successfully for worker : 38 [2020-08-29:21:36:01:INFO] Model loaded successfully for worker : 36 [2020-08-29:21:36:01:INFO] Model loaded successfully for worker : 37 [2020-08-29:21:36:01:INFO] Model loaded successfully for worker : 37 [2020-08-29:21:36:01:INFO] Model loaded successfully for worker : 39 [2020-08-29:21:36:02:INFO] Sniff delimiter as ',' [2020-08-29:21:36:02:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:36:02:INFO] Sniff delimiter as ',' [2020-08-29:21:36:02:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:36:02:INFO] Sniff delimiter as ',' [2020-08-29:21:36:02:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:36:01:INFO] Model loaded successfully for worker : 39 [2020-08-29:21:36:02:INFO] Sniff delimiter as ',' [2020-08-29:21:36:02:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:36:02:INFO] Sniff delimiter as ',' [2020-08-29:21:36:02:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:36:02:INFO] Sniff delimiter as ',' [2020-08-29:21:36:02:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:36:03:INFO] Sniff delimiter as ',' [2020-08-29:21:36:03:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:36:03:INFO] Sniff delimiter as ',' [2020-08-29:21:36:03:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:36:04:INFO] Sniff delimiter as ',' [2020-08-29:21:36:04:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:36:04:INFO] Sniff delimiter as ',' [2020-08-29:21:36:04:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:36:04:INFO] Sniff delimiter as ',' [2020-08-29:21:36:04:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:36:05:INFO] Sniff delimiter as ',' [2020-08-29:21:36:05:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:36:05:INFO] Sniff delimiter as ',' [2020-08-29:21:36:05:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:36:04:INFO] Sniff delimiter as ',' [2020-08-29:21:36:04:INFO] Determined 
delimiter of CSV input is ',' [2020-08-29:21:36:05:INFO] Sniff delimiter as ',' [2020-08-29:21:36:05:INFO] Determined delimiter of CSV input is ',' 2020-08-29T21:36:01.813:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-08-29:21:36:25:INFO] Sniff delimiter as ',' [2020-08-29:21:36:25:INFO] Determined delimiter of 
CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-eu-west-1-100264508876/xgboost-2020-08-29-21-32-01-941/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. 
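A slightly fuller sanity check (just a sketch, assuming `new_XV`, `new_X` and `vocabulary` are defined as in the cells above) is to verify the shape of the whole encoded array rather than a single row: ###Code
# Hypothetical extra check: one row per review and one column per vocabulary word.
assert new_XV.shape == (len(new_X), len(vocabulary)), "unexpected bag of words shape"
new_XV.shape
###Output _____no_output_____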
###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ...........................Arguments: serve Arguments: serve [2020-08-29 21:44:22 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-08-29 21:44:22 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-08-29 21:44:22 +0000] [1] [INFO] Using worker: gevent [2020-08-29 21:44:22 +0000] [36] [INFO] Booting worker with pid: 36 [2020-08-29 21:44:22 +0000] [37] [INFO] Booting worker with pid: 37 [2020-08-29:21:44:22:INFO] Model loaded successfully for worker : 36 [2020-08-29 21:44:22 +0000] [38] [INFO] Booting worker with pid: 38 [2020-08-29 21:44:22 +0000] [39] [INFO] Booting worker with pid: 39 [2020-08-29:21:44:22:INFO] Model loaded successfully for worker : 37 [2020-08-29:21:44:23:INFO] Model loaded successfully for worker : 38 [2020-08-29:21:44:23:INFO] Model loaded successfully for worker : 39 [2020-08-29:21:44:23:INFO] Sniff delimiter as ',' [2020-08-29:21:44:23:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:23:INFO] Sniff delimiter as ',' [2020-08-29:21:44:23:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:23:INFO] Sniff delimiter as ',' [2020-08-29:21:44:23:INFO] Determined delimiter of CSV input is ',' [2020-08-29 21:44:22 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-08-29 21:44:22 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-08-29 21:44:22 +0000] [1] [INFO] Using worker: gevent [2020-08-29 21:44:22 +0000] [36] [INFO] Booting worker with pid: 36 [2020-08-29 21:44:22 +0000] [37] [INFO] Booting worker with pid: 37 [2020-08-29:21:44:22:INFO] Model loaded successfully for worker : 36 [2020-08-29 21:44:22 +0000] [38] [INFO] Booting worker with pid: 38 [2020-08-29 21:44:22 +0000] [39] [INFO] Booting worker with pid: 39 [2020-08-29:21:44:22:INFO] Model loaded successfully for worker : 37 [2020-08-29:21:44:23:INFO] Model loaded successfully for worker : 38 [2020-08-29:21:44:23:INFO] Model loaded successfully for worker : 
39 [2020-08-29:21:44:23:INFO] Sniff delimiter as ',' [2020-08-29:21:44:23:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:23:INFO] Sniff delimiter as ',' [2020-08-29:21:44:23:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:23:INFO] Sniff delimiter as ',' [2020-08-29:21:44:23:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:25:INFO] Sniff delimiter as ',' [2020-08-29:21:44:25:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:25:INFO] Sniff delimiter as ',' [2020-08-29:21:44:25:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:25:INFO] Sniff delimiter as ',' [2020-08-29:21:44:25:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:25:INFO] Sniff delimiter as ',' [2020-08-29:21:44:25:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:26:INFO] Sniff delimiter as ',' [2020-08-29:21:44:26:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:26:INFO] Sniff delimiter as ',' [2020-08-29:21:44:26:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:26:INFO] Sniff delimiter as ',' [2020-08-29:21:44:26:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:26:INFO] Sniff delimiter as ',' [2020-08-29:21:44:26:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:26:INFO] Sniff delimiter as ',' [2020-08-29:21:44:26:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:26:INFO] Sniff delimiter as ',' [2020-08-29:21:44:26:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:26:INFO] Sniff delimiter as ',' [2020-08-29:21:44:26:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:26:INFO] Sniff delimiter as ',' [2020-08-29:21:44:26:INFO] Determined delimiter of CSV input is ',' 2020-08-29T21:44:22.842:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-08-29:21:44:28:INFO] Sniff delimiter as ',' [2020-08-29:21:44:28:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:28:INFO] Sniff delimiter as ',' [2020-08-29:21:44:28:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:28:INFO] Sniff delimiter as ',' [2020-08-29:21:44:28:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:29:INFO] Sniff delimiter as ',' [2020-08-29:21:44:29:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:29:INFO] Sniff delimiter as ',' [2020-08-29:21:44:29:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:28:INFO] Sniff delimiter as ',' [2020-08-29:21:44:28:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:29:INFO] Sniff delimiter as ',' [2020-08-29:21:44:29:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:29:INFO] Sniff delimiter as ',' [2020-08-29:21:44:29:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:31:INFO] Sniff delimiter as ',' [2020-08-29:21:44:31:INFO] Sniff delimiter as ',' [2020-08-29:21:44:31:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:31:INFO] Sniff delimiter as ',' [2020-08-29:21:44:31:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:31:INFO] Sniff delimiter as ',' [2020-08-29:21:44:31:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:31:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:31:INFO] Sniff delimiter as ',' [2020-08-29:21:44:31:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:31:INFO] Sniff delimiter as ',' [2020-08-29:21:44:31:INFO] Determined delimiter of CSV input is ',' 
[2020-08-29:21:44:33:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:33:INFO] Sniff delimiter as ',' [2020-08-29:21:44:33:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:34:INFO] Sniff delimiter as ',' [2020-08-29:21:44:34:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:33:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:33:INFO] Sniff delimiter as ',' [2020-08-29:21:44:33:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:34:INFO] Sniff delimiter as ',' [2020-08-29:21:44:34:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:36:INFO] Sniff delimiter as ',' [2020-08-29:21:44:36:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:36:INFO] Sniff delimiter as ',' [2020-08-29:21:44:36:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:36:INFO] Sniff delimiter as ',' [2020-08-29:21:44:36:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:36:INFO] Sniff delimiter as ',' [2020-08-29:21:44:36:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:36:INFO] Sniff delimiter as ',' [2020-08-29:21:44:36:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:36:INFO] Sniff delimiter as ',' [2020-08-29:21:44:36:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:36:INFO] Sniff delimiter as ',' [2020-08-29:21:44:36:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:36:INFO] Sniff delimiter as ',' [2020-08-29:21:44:36:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:38:INFO] Sniff delimiter as ',' [2020-08-29:21:44:38:INFO] Sniff delimiter as ',' [2020-08-29:21:44:38:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:38:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:38:INFO] Sniff delimiter as ',' [2020-08-29:21:44:38:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:38:INFO] Sniff delimiter as ',' [2020-08-29:21:44:38:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:39:INFO] Sniff delimiter as ',' [2020-08-29:21:44:39:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:38:INFO] Sniff delimiter as ',' [2020-08-29:21:44:38:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:38:INFO] Sniff delimiter as ',' [2020-08-29:21:44:38:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:39:INFO] Sniff delimiter as ',' [2020-08-29:21:44:39:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:40:INFO] Sniff delimiter as ',' [2020-08-29:21:44:40:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:41:INFO] Sniff delimiter as ',' [2020-08-29:21:44:41:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:41:INFO] Sniff delimiter as ',' [2020-08-29:21:44:41:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:41:INFO] Sniff delimiter as ',' [2020-08-29:21:44:41:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:40:INFO] Sniff delimiter as ',' [2020-08-29:21:44:40:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:41:INFO] Sniff delimiter as ',' [2020-08-29:21:44:41:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:41:INFO] Sniff delimiter as ',' [2020-08-29:21:44:41:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:41:INFO] Sniff delimiter as ',' [2020-08-29:21:44:41:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:45:INFO] Sniff delimiter as ',' [2020-08-29:21:44:45:INFO] Determined delimiter of CSV input is 
',' [2020-08-29:21:44:46:INFO] Sniff delimiter as ',' [2020-08-29:21:44:46:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:46:INFO] Sniff delimiter as ',' [2020-08-29:21:44:46:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:45:INFO] Sniff delimiter as ',' [2020-08-29:21:44:45:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:46:INFO] Sniff delimiter as ',' [2020-08-29:21:44:46:INFO] Determined delimiter of CSV input is ',' [2020-08-29:21:44:46:INFO] Sniff delimiter as ',' [2020-08-29:21:44:46:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-eu-west-1-100264508876/xgboost-2020-08-29-21-39-41-391/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(instance_type='ml.m4.xlarge', initial_instance_count=1) ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. Using already existing model: xgboost-2020-08-29-21-25-18-034 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. 
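# (Aside, an assumption about the newer SageMaker Python SDK: in v2 and later the content type is
# handled by a serializer object, e.g. xgb_predictor.serializer = sagemaker.serializers.CSVSerializer();
# the attribute-style assignments below follow the v1 API used throughout this notebook.)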
xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['look', 'cute', 'simpl', 'comedi', 'pass', 'time', 'choos', 'film', 'prove', 'enorm', 'mistak', 'write', 'singl', 'good', 'thing', 'first', 'script', 'stupid', 'funni', 'reli', 'tire', 'recycl', 'joke', 'fart', 'turtl', 'laugh', 'book', 'funni', 'pathet', 'low', 'budget', 'effect', 'even', 'call', 'effect', 'horribl', 'cinematographi', 'mani', 'place', 'feel', 'almost', 'like', 'indi', 'film', 'shot', 'money', 'act', 'feel', 'sorri', 'actor', 'pamela', 'anderson', 'denis', 'richard', 'desper', 'money', 'agre', 'take', 'part', 'look', 'recent', 'filmographi', 'would', 'appear', 'despit', 'outfit', 'pamela', 'show', 'age', 'whole', 'even', 'come', 'across', 'sexi', 'let', 'alon', 'funni', 'movi', 'even', 'bad', 'funni', 'categori', 'bad', 'everybodi', 'involv', 'sick', 'avoid', 'banana'], 1) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:484: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None' warnings.warn("The parameter 'token_pattern' will not be used" ###Markdown Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'substanti', 'motorcycl', 'omin', 'hostil', 'sophi', 'weaker', '21st', 'vastli'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. 
###Code print(new_vocabulary - original_vocabulary) ###Output {'banana', 'casper', 'modest', 'rapidli', 'sparkl', 'evan', 'playboy', 'mice'}
###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data.
###Code
new_words = new_vocabulary - original_vocabulary
new_words

# Note: new_vectorizer.vocabulary_ maps each word to its column index in the bag of words
# encoding, not to its frequency, so to see how often the new words actually occur we sum
# the corresponding columns of the bag of words counts computed on the new data.
new_data_counts = new_vectorizer.transform(new_X).sum(axis=0)
new_words_count = {}
for word in list(new_words):
    new_words_count[word] = int(new_data_counts[0, new_vectorizer.vocabulary_[word]])
new_words_count

original_words = original_vocabulary - new_vocabulary
original_words

# Likewise, count how often the words that dropped out of the vocabulary occur in the new
# data, using the original vectorizer (which was constructed with the original vocabulary).
original_data_counts = vectorizer.transform(new_X).sum(axis=0)
original_words_count = {}
for word in list(original_words):
    original_words_count[word] = int(original_data_counts[0, vectorizer.vocabulary_[word]])
original_words_count
###Output _____no_output_____
###Markdown As we can see above, the vocabulary of the new data has shifted relative to the original training data: words such as 'banana' now appear often enough in the new reviews to make it into the top 5000 words, while words such as 'substanti' and 'motorcycl' are no longer frequent enough to make the cut. In other words, the distribution of words used in incoming reviews has changed, and our original model, which was built on the old vocabulary, does not take this change into account. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary that we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words.
Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, role, train_instance_count=1, train_instance_type='ml.m4.xlarge', output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, gamma=4, eta=0.2, min_child_weight=6, early_stopping_rounds=10, num_round=500, subsample=0.8, silent=0, objective='binary:logistic') ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2020-08-29 22:39:58 Starting - Starting the training job... 2020-08-29 22:40:00 Starting - Launching requested ML instances...... 2020-08-29 22:41:23 Starting - Preparing the instances for training...... 2020-08-29 22:42:17 Downloading - Downloading input data... 
2020-08-29 22:42:56 Training - Training image download completed. Training in progress..Arguments: train [2020-08-29:22:42:56:INFO] Running standalone xgboost training. [2020-08-29:22:42:56:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8483.28mb [2020-08-29:22:42:56:INFO] Determined delimiter of CSV input is ',' [22:42:56] S3DistributionType set as FullyReplicated [22:42:58] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-08-29:22:42:58:INFO] Determined delimiter of CSV input is ',' [22:42:58] S3DistributionType set as FullyReplicated [22:42:59] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [22:43:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 8 pruned nodes, max_depth=5 [0]#011train-error:0.302933#011validation-error:0.3027 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [22:43:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 4 pruned nodes, max_depth=5 [1]#011train-error:0.277267#011validation-error:0.2818 [22:43:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [2]#011train-error:0.2778#011validation-error:0.2803 [22:43:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [3]#011train-error:0.2698#011validation-error:0.273 [22:43:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [4]#011train-error:0.261133#011validation-error:0.2669 [22:43:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [5]#011train-error:0.261067#011validation-error:0.2645 [22:43:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 8 pruned nodes, max_depth=5 [6]#011train-error:0.2474#011validation-error:0.2534 [22:43:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [7]#011train-error:0.2404#011validation-error:0.2482 [22:43:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5 [8]#011train-error:0.2392#011validation-error:0.2463 [22:43:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [9]#011train-error:0.233133#011validation-error:0.242 [22:43:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [10]#011train-error:0.229533#011validation-error:0.2403 [22:43:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5 [11]#011train-error:0.224933#011validation-error:0.2344 [22:43:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [12]#011train-error:0.217467#011validation-error:0.2282 [22:43:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 16 pruned nodes, max_depth=5 [13]#011train-error:0.2146#011validation-error:0.227 [22:43:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [14]#011train-error:0.212467#011validation-error:0.2268 [22:43:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, 
max_depth=5 [15]#011train-error:0.209133#011validation-error:0.2221 [22:43:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [16]#011train-error:0.2062#011validation-error:0.2173 [22:43:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [17]#011train-error:0.203867#011validation-error:0.2181 [22:43:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [18]#011train-error:0.201067#011validation-error:0.2163 [22:43:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [19]#011train-error:0.200133#011validation-error:0.212 [22:43:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [20]#011train-error:0.200333#011validation-error:0.2116 [22:43:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [21]#011train-error:0.196933#011validation-error:0.207 [22:43:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [22]#011train-error:0.1934#011validation-error:0.2044 [22:43:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [23]#011train-error:0.192533#011validation-error:0.2044 [22:43:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [24]#011train-error:0.1898#011validation-error:0.2021 [22:43:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [25]#011train-error:0.1868#011validation-error:0.1992 [22:43:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [26]#011train-error:0.185133#011validation-error:0.197 [22:43:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [27]#011train-error:0.1822#011validation-error:0.198 [22:43:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [28]#011train-error:0.180333#011validation-error:0.1968 [22:43:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5 [29]#011train-error:0.178667#011validation-error:0.1947 [22:43:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 16 pruned nodes, max_depth=5 [30]#011train-error:0.177267#011validation-error:0.1933 [22:43:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [31]#011train-error:0.1762#011validation-error:0.1933 [22:43:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [32]#011train-error:0.174733#011validation-error:0.1964 [22:43:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [33]#011train-error:0.172333#011validation-error:0.1949 [22:43:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [34]#011train-error:0.172333#011validation-error:0.1927 [22:43:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5 [35]#011train-error:0.172133#011validation-error:0.1911 [22:43:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 
[36]#011train-error:0.1722#011validation-error:0.1913 [22:43:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [37]#011train-error:0.171#011validation-error:0.1902 [22:43:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5 [38]#011train-error:0.168267#011validation-error:0.19 [22:43:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [39]#011train-error:0.168133#011validation-error:0.1899 [22:43:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [40]#011train-error:0.167133#011validation-error:0.189 [22:43:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [41]#011train-error:0.167333#011validation-error:0.1888 [22:43:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [42]#011train-error:0.166467#011validation-error:0.1886 [22:43:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [43]#011train-error:0.165333#011validation-error:0.1876 [22:44:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 14 pruned nodes, max_depth=5 [44]#011train-error:0.164267#011validation-error:0.1869 [22:44:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [45]#011train-error:0.162533#011validation-error:0.1857 [22:44:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 10 pruned nodes, max_depth=5 [46]#011train-error:0.162#011validation-error:0.1855 [22:44:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [47]#011train-error:0.160133#011validation-error:0.1857 [22:44:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 6 pruned nodes, max_depth=5 [48]#011train-error:0.159733#011validation-error:0.1854 [22:44:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [49]#011train-error:0.158333#011validation-error:0.1861 [22:44:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [50]#011train-error:0.1578#011validation-error:0.1851 [22:44:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [51]#011train-error:0.157667#011validation-error:0.184 [22:44:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [52]#011train-error:0.156667#011validation-error:0.184 [22:44:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [53]#011train-error:0.155867#011validation-error:0.1822 [22:44:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [54]#011train-error:0.1544#011validation-error:0.1828 [22:44:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [55]#011train-error:0.153333#011validation-error:0.1816 [22:44:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [56]#011train-error:0.1526#011validation-error:0.1809 [22:44:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 
[57]#011train-error:0.151333#011validation-error:0.1814 [22:44:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [58]#011train-error:0.1504#011validation-error:0.1814 [22:44:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [59]#011train-error:0.1492#011validation-error:0.1811 [22:44:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5 [60]#011train-error:0.148467#011validation-error:0.1816 [22:44:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [61]#011train-error:0.148#011validation-error:0.1819 [22:44:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5 [62]#011train-error:0.147733#011validation-error:0.1822 [22:44:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5 [63]#011train-error:0.1474#011validation-error:0.1823 [22:44:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 16 pruned nodes, max_depth=5 [64]#011train-error:0.146933#011validation-error:0.1812 [22:44:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [65]#011train-error:0.146733#011validation-error:0.1809 [22:44:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [66]#011train-error:0.146533#011validation-error:0.1805 [22:44:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5 [67]#011train-error:0.145067#011validation-error:0.1812 [22:44:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 12 pruned nodes, max_depth=5 [68]#011train-error:0.1446#011validation-error:0.1801 [22:44:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [69]#011train-error:0.1442#011validation-error:0.1804 [22:44:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5 [70]#011train-error:0.1434#011validation-error:0.1804 [22:44:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 14 pruned nodes, max_depth=5 [71]#011train-error:0.1432#011validation-error:0.1803 [22:44:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [72]#011train-error:0.142#011validation-error:0.1791 [22:44:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 8 pruned nodes, max_depth=5 [73]#011train-error:0.142#011validation-error:0.1801 [22:44:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 2 pruned nodes, max_depth=5 [74]#011train-error:0.1418#011validation-error:0.1797 [22:44:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 8 pruned nodes, max_depth=5 [75]#011train-error:0.141333#011validation-error:0.1786 [22:44:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 14 pruned nodes, max_depth=5 [76]#011train-error:0.1394#011validation-error:0.1772 [22:44:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 8 pruned nodes, max_depth=5 [77]#011train-error:0.1398#011validation-error:0.1768 [22:44:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5 
[78]#011train-error:0.139133#011validation-error:0.1775 [22:44:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5 [79]#011train-error:0.138467#011validation-error:0.1772 [22:44:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [80]#011train-error:0.137333#011validation-error:0.1766 [22:44:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [81]#011train-error:0.1366#011validation-error:0.1764 [22:44:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5 [82]#011train-error:0.136467#011validation-error:0.1764 [22:44:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [83]#011train-error:0.1356#011validation-error:0.1755 [22:44:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 4 pruned nodes, max_depth=5 [84]#011train-error:0.135333#011validation-error:0.1757 [22:44:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [85]#011train-error:0.1346#011validation-error:0.1773 [22:44:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [86]#011train-error:0.1342#011validation-error:0.1766 [22:44:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 14 pruned nodes, max_depth=5 [87]#011train-error:0.133#011validation-error:0.1767 [22:44:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [88]#011train-error:0.132267#011validation-error:0.1752 [22:44:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 2 pruned nodes, max_depth=5 [89]#011train-error:0.131933#011validation-error:0.1748 [22:44:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5 [90]#011train-error:0.132#011validation-error:0.1745 [22:45:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [91]#011train-error:0.130667#011validation-error:0.1756 [22:45:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 2 pruned nodes, max_depth=5 [92]#011train-error:0.1308#011validation-error:0.1764 [22:45:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 0 pruned nodes, max_depth=5 [93]#011train-error:0.131#011validation-error:0.1765 [22:45:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 0 pruned nodes, max_depth=5 [94]#011train-error:0.129#011validation-error:0.1769 [22:45:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 4 pruned nodes, max_depth=5 [95]#011train-error:0.129133#011validation-error:0.1768 [22:45:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [96]#011train-error:0.1298#011validation-error:0.1768 [22:45:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5 [97]#011train-error:0.129533#011validation-error:0.1764 [22:45:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 4 pruned nodes, max_depth=5 [98]#011train-error:0.128133#011validation-error:0.1767 [22:45:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 
[99]#011train-error:0.128333#011validation-error:0.1769 [22:45:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 4 pruned nodes, max_depth=5 [100]#011train-error:0.128067#011validation-error:0.1768 Stopping. Best iteration: [90]#011train-error:0.132#011validation-error:0.1745  2020-08-29 22:45:20 Uploading - Uploading generated training model 2020-08-29 22:45:20 Completed - Training job completed Training seconds: 183 Billable seconds: 183
###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? Since the new model was trained on all of the new data, evaluating it on that same data will overstate its performance and the reported accuracy will not reflect how well it generalizes. To address the leakage we should hold out a portion of the new data that the model never sees during training, for example a separate test split (or use cross-validation), and measure performance only on that unseen data. First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model.
###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count=1, instance_type='ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable). ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish.
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ..........................Arguments: serve Arguments: serve [2020-08-29 22:51:59 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-08-29 22:51:59 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-08-29 22:51:59 +0000] [1] [INFO] Using worker: gevent [2020-08-29 22:51:59 +0000] [36] [INFO] Booting worker with pid: 36 [2020-08-29 22:51:59 +0000] [37] [INFO] Booting worker with pid: 37 [2020-08-29 22:51:59 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-08-29 22:51:59 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-08-29 22:51:59 +0000] [1] [INFO] Using worker: gevent [2020-08-29 22:51:59 +0000] [36] [INFO] Booting worker with pid: 36 [2020-08-29 22:51:59 +0000] [37] [INFO] Booting worker with pid: 37 [2020-08-29:22:51:59:INFO] Model loaded successfully for worker : 36 [2020-08-29:22:51:59:INFO] Model loaded successfully for worker : 37 [2020-08-29 22:51:59 +0000] [38] [INFO] Booting worker with pid: 38 [2020-08-29 22:52:00 +0000] [39] [INFO] Booting worker with pid: 39 [2020-08-29:22:52:00:INFO] Model loaded successfully for worker : 38 [2020-08-29:22:51:59:INFO] Model loaded successfully for worker : 36 [2020-08-29:22:51:59:INFO] Model loaded successfully for worker : 37 [2020-08-29 22:51:59 +0000] [38] [INFO] Booting worker with pid: 38 [2020-08-29 22:52:00 +0000] [39] [INFO] Booting worker with pid: 39 [2020-08-29:22:52:00:INFO] Model loaded successfully for worker : 38 [2020-08-29:22:52:00:INFO] Model loaded successfully for worker : 39 [2020-08-29:22:52:00:INFO] Sniff delimiter as ',' [2020-08-29:22:52:00:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:00:INFO] Sniff delimiter as ',' [2020-08-29:22:52:00:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:00:INFO] Sniff delimiter as ',' [2020-08-29:22:52:00:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:00:INFO] Sniff delimiter as ',' [2020-08-29:22:52:00:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:00:INFO] Model loaded successfully for worker : 39 [2020-08-29:22:52:00:INFO] Sniff delimiter as ',' [2020-08-29:22:52:00:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:00:INFO] Sniff delimiter as ',' [2020-08-29:22:52:00:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:00:INFO] Sniff delimiter as ',' [2020-08-29:22:52:00:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:00:INFO] Sniff delimiter as ',' [2020-08-29:22:52:00:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:02:INFO] Sniff delimiter as ',' [2020-08-29:22:52:02:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:02:INFO] Sniff delimiter as ',' [2020-08-29:22:52:02:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:03:INFO] Sniff delimiter as ',' [2020-08-29:22:52:02:INFO] Sniff delimiter as ',' [2020-08-29:22:52:02:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:02:INFO] Sniff delimiter as ',' [2020-08-29:22:52:02:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:03:INFO] Sniff delimiter as ',' [2020-08-29:22:52:03:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:03:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:03:INFO] Sniff delimiter as ',' [2020-08-29:22:52:03:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:03:INFO] Sniff delimiter as ',' [2020-08-29:22:52:03:INFO] Determined 
delimiter of CSV input is ',' 2020-08-29T22:51:59.934:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-08-29:22:52:05:INFO] Sniff delimiter as ',' [2020-08-29:22:52:05:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:05:INFO] Sniff delimiter as ',' [2020-08-29:22:52:05:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:05:INFO] Sniff delimiter as ',' [2020-08-29:22:52:05:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:05:INFO] Sniff delimiter as ',' [2020-08-29:22:52:05:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:05:INFO] Sniff delimiter as ',' [2020-08-29:22:52:05:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:05:INFO] Sniff delimiter as ',' [2020-08-29:22:52:05:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:05:INFO] Sniff delimiter as ',' [2020-08-29:22:52:05:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:05:INFO] Sniff delimiter as ',' [2020-08-29:22:52:05:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:07:INFO] Sniff delimiter as ',' [2020-08-29:22:52:07:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:07:INFO] Sniff delimiter as ',' [2020-08-29:22:52:07:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:07:INFO] Sniff delimiter as ',' [2020-08-29:22:52:07:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:07:INFO] Sniff delimiter as ',' [2020-08-29:22:52:07:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:07:INFO] Sniff delimiter as ',' [2020-08-29:22:52:07:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:07:INFO] Sniff delimiter as ',' [2020-08-29:22:52:07:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:10:INFO] Sniff delimiter as ',' [2020-08-29:22:52:10:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:10:INFO] Sniff delimiter as ',' [2020-08-29:22:52:10:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:10:INFO] Sniff delimiter as ',' [2020-08-29:22:52:10:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:10:INFO] Sniff delimiter as ',' [2020-08-29:22:52:10:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:12:INFO] Sniff delimiter as ',' [2020-08-29:22:52:12:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:12:INFO] Sniff delimiter as ',' [2020-08-29:22:52:12:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:12:INFO] Sniff delimiter as ',' [2020-08-29:22:52:12:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:12:INFO] Sniff delimiter as ',' [2020-08-29:22:52:12:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:12:INFO] Sniff delimiter as ',' [2020-08-29:22:52:12:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:12:INFO] Sniff delimiter as ',' [2020-08-29:22:52:12:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:12:INFO] Sniff delimiter as ',' [2020-08-29:22:52:12:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:12:INFO] Sniff delimiter as ',' [2020-08-29:22:52:12:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:14:INFO] Sniff delimiter as ',' [2020-08-29:22:52:14:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:14:INFO] Sniff delimiter as ',' [2020-08-29:22:52:14:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:15:INFO] Sniff delimiter as ',' [2020-08-29:22:52:15:INFO] Determined delimiter of CSV 
input is ',' [2020-08-29:22:52:15:INFO] Sniff delimiter as ',' [2020-08-29:22:52:15:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:15:INFO] Sniff delimiter as ',' [2020-08-29:22:52:15:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:15:INFO] Sniff delimiter as ',' [2020-08-29:22:52:15:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:15:INFO] Sniff delimiter as ',' [2020-08-29:22:52:15:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:15:INFO] Sniff delimiter as ',' [2020-08-29:22:52:15:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:17:INFO] Sniff delimiter as ',' [2020-08-29:22:52:17:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:17:INFO] Sniff delimiter as ',' [2020-08-29:22:52:17:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:17:INFO] Sniff delimiter as ',' [2020-08-29:22:52:17:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:17:INFO] Sniff delimiter as ',' [2020-08-29:22:52:17:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:17:INFO] Sniff delimiter as ',' [2020-08-29:22:52:17:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:17:INFO] Sniff delimiter as ',' [2020-08-29:22:52:17:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:17:INFO] Sniff delimiter as ',' [2020-08-29:22:52:17:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:17:INFO] Sniff delimiter as ',' [2020-08-29:22:52:17:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:22:INFO] Sniff delimiter as ',' [2020-08-29:22:52:22:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:22:INFO] Sniff delimiter as ',' [2020-08-29:22:52:22:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:22:INFO] Sniff delimiter as ',' [2020-08-29:22:52:22:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:22:INFO] Sniff delimiter as ',' [2020-08-29:22:52:22:INFO] Sniff delimiter as ',' [2020-08-29:22:52:22:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:22:INFO] Sniff delimiter as ',' [2020-08-29:22:52:22:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:22:INFO] Sniff delimiter as ',' [2020-08-29:22:52:22:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:22:INFO] Sniff delimiter as ',' [2020-08-29:22:52:22:INFO] Determined delimiter of CSV input is ',' [2020-08-29:22:52:22:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-eu-west-1-100264508876/xgboost-2020-08-29-22-47-49-899/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. 
In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir import pandas as pd data_dir = '../data/sentiment_update' predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. 
If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime('%Y-%m-%d-%H-%M-%S', gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. 
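Before running the clean up commands it can be worth checking how much space the working directories actually take up; the following is a minimal sketch of such a check, assuming that `data_dir` and `cache_dir` are still defined from the cells above. ###Code
# Report how much disk space the working directories use before we remove them.
# Assumes data_dir and cache_dir are still defined from the earlier cells.
import os

def dir_size_mb(path):
    """Total size (in MB) of all files underneath `path`."""
    total = 0
    for root, _, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / 2**20

for d in [data_dir, cache_dir]:
    print('{}: {:.1f} MB'.format(d, dir_size_mb(d)))
###Output _____no_output_____ ###Markdown The next cell then removes the files themselves.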
###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-08-06 07:14:19-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 
200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 23.7MB/s in 4.0s 2020-08-06 07:14:23 (19.9 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
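As a toy illustration of this point, suppose we have a couple of made-up, already-tokenized documents: the vectorizer learns its vocabulary from the training documents alone, and any word that only shows up at transform time is silently dropped. ###Code
# Toy illustration with made-up documents: fit on the training documents only,
# then transform the test documents with the resulting (training) vocabulary.
from sklearn.feature_extraction.text import CountVectorizer

toy_train = [['great', 'movi', 'love'], ['terribl', 'movi']]
toy_test = [['love', 'unseen', 'word']]  # 'unseen' and 'word' never occur in toy_train

toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
print(toy_vectorizer.fit_transform(toy_train).toarray())  # columns come from the training vocabulary
print(toy_vectorizer.transform(toy_test).toarray())       # out-of-vocabulary words contribute nothing
print(toy_vectorizer.vocabulary_)
###Output _____no_output_____ ###Markdown The cell below applies the same idea to the full review data, with a cache so that the expensive transformation only needs to be computed once.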
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker.
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the training artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) xgb._current_job_name xgb_attach = sagemaker.estimator.Estimator.attach('xgboost-2020-08-06-07-20-00-171') xgb_attach xgb_attach xgb_attach.__dir__() ###Output _____no_output_____ ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can perform inference on a large number of samples. An example of this in industry might be performing an end of month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set.
To perform a Batch Transformation we need to first create a transformer objects from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') # i can use xgb_attach ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output ...........................2020-08-06T07:37:16.779:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2020-08-06 07:37:16 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-08-06 07:37:16 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-08-06 07:37:16 +0000] [1] [INFO] Using worker: gevent [2020-08-06 07:37:16 +0000] [37] [INFO] Booting worker with pid: 37 [2020-08-06 07:37:16 +0000] [38] [INFO] Booting worker with pid: 38 [2020-08-06 07:37:16 +0000] [39] [INFO] Booting worker with pid: 39 [2020-08-06 07:37:16 +0000] [40] [INFO] Booting worker with pid: 40 [2020-08-06:07:37:16:INFO] Model loaded successfully for worker : 38 [2020-08-06:07:37:16:INFO] Model loaded successfully for worker : 37 Arguments: serve [2020-08-06 07:37:16 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-08-06 07:37:16 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-08-06 07:37:16 +0000] [1] [INFO] Using worker: gevent [2020-08-06 07:37:16 +0000] [37] [INFO] Booting worker with pid: 37 [2020-08-06 07:37:16 +0000] [38] [INFO] Booting worker with pid: 38 [2020-08-06 07:37:16 +0000] [39] [INFO] Booting worker with pid: 39 [2020-08-06 07:37:16 +0000] [40] [INFO] Booting worker with pid: 40 [2020-08-06:07:37:16:INFO] Model loaded successfully for worker : 38 [2020-08-06:07:37:16:INFO] Model loaded successfully for worker : 37 [2020-08-06:07:37:16:INFO] Model loaded successfully for worker : 39 [2020-08-06:07:37:16:INFO] Model loaded successfully for worker : 40 [2020-08-06:07:37:17:INFO] Sniff delimiter as ',' [2020-08-06:07:37:17:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:17:INFO] Sniff delimiter as ',' [2020-08-06:07:37:17:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:17:INFO] Sniff delimiter as ',' [2020-08-06:07:37:17:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:17:INFO] Sniff delimiter as ',' [2020-08-06:07:37:17:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:16:INFO] Model loaded successfully for worker : 39 [2020-08-06:07:37:16:INFO] Model loaded successfully for worker : 40 [2020-08-06:07:37:17:INFO] Sniff delimiter as ',' [2020-08-06:07:37:17:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:17:INFO] Sniff delimiter as ',' [2020-08-06:07:37:17:INFO] 
Determined delimiter of CSV input is ',' [2020-08-06:07:37:17:INFO] Sniff delimiter as ',' [2020-08-06:07:37:17:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:17:INFO] Sniff delimiter as ',' [2020-08-06:07:37:17:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:19:INFO] Sniff delimiter as ',' [2020-08-06:07:37:19:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:19:INFO] Sniff delimiter as ',' [2020-08-06:07:37:19:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:20:INFO] Sniff delimiter as ',' [2020-08-06:07:37:20:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:19:INFO] Sniff delimiter as ',' [2020-08-06:07:37:19:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:19:INFO] Sniff delimiter as ',' [2020-08-06:07:37:19:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:20:INFO] Sniff delimiter as ',' [2020-08-06:07:37:20:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:20:INFO] Sniff delimiter as ',' [2020-08-06:07:37:20:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:20:INFO] Sniff delimiter as ',' [2020-08-06:07:37:20:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:21:INFO] Sniff delimiter as ',' [2020-08-06:07:37:21:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:21:INFO] Sniff delimiter as ',' [2020-08-06:07:37:21:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:22:INFO] Sniff delimiter as ',' [2020-08-06:07:37:22:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:22:INFO] Sniff delimiter as ',' [2020-08-06:07:37:22:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:22:INFO] Sniff delimiter as ',' [2020-08-06:07:37:22:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:22:INFO] Sniff delimiter as ',' [2020-08-06:07:37:22:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:24:INFO] Sniff delimiter as ',' [2020-08-06:07:37:24:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:24:INFO] Sniff delimiter as ',' [2020-08-06:07:37:24:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:24:INFO] Sniff delimiter as ',' [2020-08-06:07:37:24:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:24:INFO] Sniff delimiter as ',' [2020-08-06:07:37:24:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:24:INFO] Sniff delimiter as ',' [2020-08-06:07:37:24:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:24:INFO] Sniff delimiter as ',' [2020-08-06:07:37:24:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:25:INFO] Sniff delimiter as ',' [2020-08-06:07:37:25:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:25:INFO] Sniff delimiter as ',' [2020-08-06:07:37:25:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:26:INFO] Sniff delimiter as ',' [2020-08-06:07:37:26:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:27:INFO] Sniff delimiter as ',' [2020-08-06:07:37:27:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:26:INFO] Sniff delimiter as ',' [2020-08-06:07:37:26:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:27:INFO] Sniff delimiter as ',' [2020-08-06:07:37:27:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:27:INFO] Sniff delimiter as ',' [2020-08-06:07:37:27:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:27:INFO] Sniff delimiter as ',' [2020-08-06:07:37:27:INFO] Determined 
delimiter of CSV input is ',' [2020-08-06:07:37:27:INFO] Sniff delimiter as ',' [2020-08-06:07:37:27:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:27:INFO] Sniff delimiter as ',' [2020-08-06:07:37:27:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:29:INFO] Sniff delimiter as ',' [2020-08-06:07:37:29:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:29:INFO] Sniff delimiter as ',' [2020-08-06:07:37:29:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:29:INFO] Sniff delimiter as ',' [2020-08-06:07:37:29:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:29:INFO] Sniff delimiter as ',' [2020-08-06:07:37:29:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:29:INFO] Sniff delimiter as ',' [2020-08-06:07:37:29:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:29:INFO] Sniff delimiter as ',' [2020-08-06:07:37:29:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:30:INFO] Sniff delimiter as ',' [2020-08-06:07:37:30:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:30:INFO] Sniff delimiter as ',' [2020-08-06:07:37:30:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:31:INFO] Sniff delimiter as ',' [2020-08-06:07:37:31:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:31:INFO] Sniff delimiter as ',' [2020-08-06:07:37:31:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:31:INFO] Sniff delimiter as ',' [2020-08-06:07:37:31:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:31:INFO] Sniff delimiter as ',' [2020-08-06:07:37:31:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:32:INFO] Sniff delimiter as ',' [2020-08-06:07:37:32:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:32:INFO] Sniff delimiter as ',' [2020-08-06:07:37:32:INFO] Sniff delimiter as ',' [2020-08-06:07:37:32:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:32:INFO] Sniff delimiter as ',' [2020-08-06:07:37:32:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:32:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:34:INFO] Sniff delimiter as ',' [2020-08-06:07:37:34:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:34:INFO] Sniff delimiter as ',' [2020-08-06:07:37:34:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:34:INFO] Sniff delimiter as ',' [2020-08-06:07:37:34:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:34:INFO] Sniff delimiter as ',' [2020-08-06:07:37:34:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:34:INFO] Sniff delimiter as ',' [2020-08-06:07:37:34:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:34:INFO] Sniff delimiter as ',' [2020-08-06:07:37:34:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:35:INFO] Sniff delimiter as ',' [2020-08-06:07:37:35:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:35:INFO] Sniff delimiter as ',' [2020-08-06:07:37:35:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:36:INFO] Sniff delimiter as ',' [2020-08-06:07:37:36:INFO] Sniff delimiter as ',' [2020-08-06:07:37:36:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:36:INFO] Sniff delimiter as ',' [2020-08-06:07:37:36:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:36:INFO] Sniff delimiter as ',' [2020-08-06:07:37:36:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:36:INFO] Determined delimiter of 
CSV input is ',' [2020-08-06:07:37:36:INFO] Sniff delimiter as ',' [2020-08-06:07:37:36:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:37:36:INFO] Sniff delimiter as ',' [2020-08-06:07:37:36:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/370.2 KiB (4.1 MiB/s) with 1 file(s) remaining Completed 370.2 KiB/370.2 KiB (5.9 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-574947211111/xgboost-2020-08-06-07-32-55-788/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
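To see the mechanism on a tiny made-up example first: passing a `vocabulary` dictionary to `CountVectorizer` fixes the word-to-column mapping up front, so no fitting is required and words outside the vocabulary are simply ignored. ###Code
# Toy illustration with a made-up vocabulary: the word-to-column mapping is fixed
# by the supplied dictionary, so the vectorizer can transform without being fit.
from sklearn.feature_extraction.text import CountVectorizer

toy_vocab = {'great': 0, 'movi': 1, 'terribl': 2}
toy_vectorizer = CountVectorizer(vocabulary=toy_vocab, preprocessor=lambda x: x, tokenizer=lambda x: x)
print(toy_vectorizer.transform([['great', 'movi', 'brandnew']]).toarray())  # -> [[1 1 0]], 'brandnew' is ignored
###Output _____no_output_____ ###Markdown The next cell applies the same pattern to the real `vocabulary` and the new reviews.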
###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() new_XV.shape ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. 
xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ............................2020-08-06T07:51:24.498:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2020-08-06 07:51:24 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-08-06 07:51:24 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-08-06 07:51:24 +0000] [1] [INFO] Using worker: gevent [2020-08-06 07:51:24 +0000] [37] [INFO] Booting worker with pid: 37 [2020-08-06 07:51:24 +0000] [38] [INFO] Booting worker with pid: 38 [2020-08-06:07:51:24:INFO] Model loaded successfully for worker : 37 [2020-08-06 07:51:24 +0000] [39] [INFO] Booting worker with pid: 39 [2020-08-06:07:51:24:INFO] Model loaded successfully for worker : 38 [2020-08-06 07:51:24 +0000] [40] [INFO] Booting worker with pid: 40 Arguments: serve [2020-08-06 07:51:24 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-08-06 07:51:24 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-08-06 07:51:24 +0000] [1] [INFO] Using worker: gevent [2020-08-06 07:51:24 +0000] [37] [INFO] Booting worker with pid: 37 [2020-08-06 07:51:24 +0000] [38] [INFO] Booting worker with pid: 38 [2020-08-06:07:51:24:INFO] Model loaded successfully for worker : 37 [2020-08-06 07:51:24 +0000] [39] [INFO] Booting worker with pid: 39 [2020-08-06:07:51:24:INFO] Model loaded successfully for worker : 38 [2020-08-06 07:51:24 +0000] [40] [INFO] Booting worker with pid: 40 [2020-08-06:07:51:24:INFO] Model loaded successfully for worker : 39 [2020-08-06:07:51:24:INFO] Model loaded successfully for worker : 40 [2020-08-06:07:51:24:INFO] Model loaded successfully for worker : 39 [2020-08-06:07:51:24:INFO] Model loaded successfully for worker : 40 [2020-08-06:07:51:24:INFO] Sniff delimiter as ',' [2020-08-06:07:51:24:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:24:INFO] Sniff delimiter as ',' [2020-08-06:07:51:24:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:25:INFO] Sniff delimiter as ',' [2020-08-06:07:51:25:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:24:INFO] Sniff delimiter as ',' [2020-08-06:07:51:24:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:24:INFO] Sniff delimiter as ',' [2020-08-06:07:51:24:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:25:INFO] Sniff delimiter as ',' [2020-08-06:07:51:25:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:26:INFO] Sniff delimiter as ',' [2020-08-06:07:51:26:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:26:INFO] Sniff delimiter as ',' [2020-08-06:07:51:26:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:26:INFO] Sniff delimiter as ',' [2020-08-06:07:51:26:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:26:INFO] Sniff delimiter as ',' [2020-08-06:07:51:26:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:26:INFO] Sniff delimiter as ',' [2020-08-06:07:51:26:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:26:INFO] Sniff delimiter as ',' [2020-08-06:07:51:26:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:28:INFO] Sniff delimiter as ',' [2020-08-06:07:51:28:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:28:INFO] Sniff delimiter as ',' [2020-08-06:07:51:28:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:28:INFO] Sniff delimiter as ',' [2020-08-06:07:51:28:INFO] Determined delimiter of CSV 
input is ',' [2020-08-06:07:51:28:INFO] Sniff delimiter as ',' [2020-08-06:07:51:28:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:29:INFO] Sniff delimiter as ',' [2020-08-06:07:51:29:INFO] Sniff delimiter as ',' [2020-08-06:07:51:29:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:29:INFO] Sniff delimiter as ',' [2020-08-06:07:51:29:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:29:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:29:INFO] Sniff delimiter as ',' [2020-08-06:07:51:29:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:30:INFO] Sniff delimiter as ',' [2020-08-06:07:51:30:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:30:INFO] Sniff delimiter as ',' [2020-08-06:07:51:30:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:31:INFO] Sniff delimiter as ',' [2020-08-06:07:51:31:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:31:INFO] Sniff delimiter as ',' [2020-08-06:07:51:31:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:31:INFO] Sniff delimiter as ',' [2020-08-06:07:51:31:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:31:INFO] Sniff delimiter as ',' [2020-08-06:07:51:31:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:31:INFO] Sniff delimiter as ',' [2020-08-06:07:51:31:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:31:INFO] Sniff delimiter as ',' [2020-08-06:07:51:31:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:33:INFO] Sniff delimiter as ',' [2020-08-06:07:51:33:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:33:INFO] Sniff delimiter as ',' [2020-08-06:07:51:33:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:33:INFO] Sniff delimiter as ',' [2020-08-06:07:51:33:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:33:INFO] Sniff delimiter as ',' [2020-08-06:07:51:33:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:34:INFO] Sniff delimiter as ',' [2020-08-06:07:51:34:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:34:INFO] Sniff delimiter as ',' [2020-08-06:07:51:34:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:35:INFO] Sniff delimiter as ',' [2020-08-06:07:51:35:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:35:INFO] Sniff delimiter as ',' [2020-08-06:07:51:35:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:36:INFO] Sniff delimiter as ',' [2020-08-06:07:51:36:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:36:INFO] Sniff delimiter as ',' [2020-08-06:07:51:36:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:36:INFO] Sniff delimiter as ',' [2020-08-06:07:51:36:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:36:INFO] Sniff delimiter as ',' [2020-08-06:07:51:36:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:36:INFO] Sniff delimiter as ',' [2020-08-06:07:51:36:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:36:INFO] Sniff delimiter as ',' [2020-08-06:07:51:36:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:37:INFO] Sniff delimiter as ',' [2020-08-06:07:51:37:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:38:INFO] Sniff delimiter as ',' [2020-08-06:07:51:38:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:37:INFO] Sniff delimiter as ',' [2020-08-06:07:51:37:INFO] Determined delimiter of CSV input is ',' 
[2020-08-06:07:51:38:INFO] Sniff delimiter as ',' [2020-08-06:07:51:38:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:39:INFO] Sniff delimiter as ',' [2020-08-06:07:51:39:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:39:INFO] Sniff delimiter as ',' [2020-08-06:07:51:39:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:39:INFO] Sniff delimiter as ',' [2020-08-06:07:51:39:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:39:INFO] Sniff delimiter as ',' [2020-08-06:07:51:39:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:40:INFO] Sniff delimiter as ',' [2020-08-06:07:51:40:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:40:INFO] Sniff delimiter as ',' [2020-08-06:07:51:40:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:40:INFO] Sniff delimiter as ',' [2020-08-06:07:51:40:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:40:INFO] Sniff delimiter as ',' [2020-08-06:07:51:40:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:41:INFO] Sniff delimiter as ',' [2020-08-06:07:51:41:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:41:INFO] Sniff delimiter as ',' [2020-08-06:07:51:41:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:41:INFO] Sniff delimiter as ',' [2020-08-06:07:51:41:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:41:INFO] Sniff delimiter as ',' [2020-08-06:07:51:41:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:42:INFO] Sniff delimiter as ',' [2020-08-06:07:51:42:INFO] Sniff delimiter as ',' [2020-08-06:07:51:42:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:43:INFO] Sniff delimiter as ',' [2020-08-06:07:51:43:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:42:INFO] Determined delimiter of CSV input is ',' [2020-08-06:07:51:43:INFO] Sniff delimiter as ',' [2020-08-06:07:51:43:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/370.7 KiB (3.7 MiB/s) with 1 file(s) remaining Completed 370.7 KiB/370.7 KiB (5.3 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-574947211111/xgboost-2020-08-06-07-46-54-097/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. 
Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. Using already existing model: xgboost-2020-08-06-07-20-00-171 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['cut', 'chase', 'one', 'amaz', 'intens', 'film', 'seen', 'long', 'time', 'first', 'movi', 'year', 'left', 'absolut', 'stagger', 'could', 'bare', 'feel', 'way', 'theatr', 'overwhelm', 'stare', 'screen', 'fifteen', 'minut', 'tri', 'find', 'way', 'describ', 'power', 'film', 'fail', 'highlight', 'one', 'aspect', 'documentari', 'style', 'video', 'diari', 'format', 'unflinch', 'portray', 'event', 'forc', 'charact', 'seem', 'trivialis', 'may', 'find', 'laughabl', 'killer', 'could', 'characteris', 'normal', 'killer', 'rave', 'lunat', 'foam', 'mouth', 'mani', 'quit', 'regular', 'unassum', 'peopl', 'wire', 'differ', 'perhap', 'chill', 'thought'], 1) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. 
The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:507: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None' warnings.warn("The parameter 'token_pattern' will not be used" ###Markdown Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'victorian', 'playboy', 'reincarn', 'spill', 'ghetto', '21st', 'weari'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'dubiou', 'omin', 'optimist', 'sophi', 'orchestr', 'banana', 'masterson'} ###Markdown These words themselves don't tell us much, however if one of these words occured with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here. Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send an bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. 
###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, role, train_instance_count=1, train_instance_type='ml.m4.xlarge', output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. 
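###Markdown
The `Parameter image_name will be renamed to image_uri` warning above comes from the version 1 argument names of the SageMaker Python SDK used throughout this notebook. Purely for reference, a rough sketch of the equivalent estimator construction under SDK v2 is shown below; the XGBoost version string and the commented-out `fit` call are assumptions, not something executed here.
###Code
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

# SDK v2 replaces get_image_uri with image_uris.retrieve and renames the
# image_name / train_instance_* parameters.
container_v2 = sagemaker.image_uris.retrieve(
    framework="xgboost", region=session.boto_region_name, version="1.0-1")

new_xgb_v2 = Estimator(image_uri=container_v2,
                       role=role,
                       instance_count=1,
                       instance_type="ml.m4.xlarge",
                       output_path="s3://{}/{}/output".format(session.default_bucket(), prefix),
                       sagemaker_session=session)

new_xgb_v2.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6,
                               subsample=0.8, objective="binary:logistic",
                               early_stopping_rounds=10, num_round=500)

# sagemaker.s3_input becomes sagemaker.inputs.TrainingInput in SDK v2.
# new_xgb_v2.fit({"train": TrainingInput(s3_data=new_train_location, content_type="csv"),
#                 "validation": TrainingInput(s3_data=new_val_location, content_type="csv")})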
###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = None s3_new_input_validation = None s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2020-08-06 08:07:13 Starting - Starting the training job... 2020-08-06 08:07:15 Starting - Launching requested ML instances......... 2020-08-06 08:08:45 Starting - Preparing the instances for training... 2020-08-06 08:09:38 Downloading - Downloading input data... 2020-08-06 08:10:12 Training - Training image download completed. Training in progress..Arguments: train [2020-08-06:08:10:13:INFO] Running standalone xgboost training. [2020-08-06:08:10:13:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8490.9mb [2020-08-06:08:10:13:INFO] Determined delimiter of CSV input is ',' [08:10:13] S3DistributionType set as FullyReplicated [08:10:15] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-08-06:08:10:15:INFO] Determined delimiter of CSV input is ',' [08:10:15] S3DistributionType set as FullyReplicated [08:10:16] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [08:10:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 4 pruned nodes, max_depth=5 [0]#011train-error:0.313333#011validation-error:0.3174 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. 
[08:10:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [1]#011train-error:0.294533#011validation-error:0.2956 [08:10:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 6 pruned nodes, max_depth=5 [2]#011train-error:0.281267#011validation-error:0.2881 [08:10:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [3]#011train-error:0.278333#011validation-error:0.2861 [08:10:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [4]#011train-error:0.261867#011validation-error:0.2684 [08:10:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.253867#011validation-error:0.2644 [08:10:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 2 pruned nodes, max_depth=5 [6]#011train-error:0.248133#011validation-error:0.2604 [08:10:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [7]#011train-error:0.237#011validation-error:0.2508 [08:10:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5 [8]#011train-error:0.235333#011validation-error:0.2491 [08:10:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [9]#011train-error:0.2326#011validation-error:0.246 [08:10:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [10]#011train-error:0.231#011validation-error:0.2445 [08:10:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 2 pruned nodes, max_depth=5 [11]#011train-error:0.2296#011validation-error:0.2416 [08:10:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [12]#011train-error:0.2224#011validation-error:0.2338 [08:10:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 6 pruned nodes, max_depth=5 [13]#011train-error:0.2192#011validation-error:0.2301 [08:10:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [14]#011train-error:0.2162#011validation-error:0.2293 [08:10:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [15]#011train-error:0.210133#011validation-error:0.2252 [08:10:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [16]#011train-error:0.208667#011validation-error:0.225 [08:10:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [17]#011train-error:0.208267#011validation-error:0.2203 [08:10:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [18]#011train-error:0.2036#011validation-error:0.2175 [08:10:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [19]#011train-error:0.201933#011validation-error:0.2115 [08:10:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [20]#011train-error:0.200267#011validation-error:0.2117 [08:10:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [21]#011train-error:0.198#011validation-error:0.2106 [08:10:48] src/tree/updater_prune.cc:74: tree pruning 
end, 1 roots, 50 extra nodes, 4 pruned nodes, max_depth=5 [22]#011train-error:0.192#011validation-error:0.2069 [08:10:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [23]#011train-error:0.1886#011validation-error:0.2069 [08:10:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [24]#011train-error:0.185733#011validation-error:0.2037 [08:10:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [25]#011train-error:0.183867#011validation-error:0.2019 [08:10:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 2 pruned nodes, max_depth=5 [26]#011train-error:0.1814#011validation-error:0.2025 [08:10:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5 [27]#011train-error:0.180467#011validation-error:0.2004 [08:10:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5 [28]#011train-error:0.1788#011validation-error:0.1979 [08:10:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5 [29]#011train-error:0.1784#011validation-error:0.196 [08:10:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [30]#011train-error:0.178067#011validation-error:0.1956 [08:10:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 8 pruned nodes, max_depth=5 [31]#011train-error:0.1766#011validation-error:0.1942 [08:11:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 16 pruned nodes, max_depth=5 [32]#011train-error:0.174667#011validation-error:0.1918 [08:11:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5 [33]#011train-error:0.172933#011validation-error:0.1935 [08:11:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [34]#011train-error:0.170667#011validation-error:0.1904 [08:11:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [35]#011train-error:0.168933#011validation-error:0.187 [08:11:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [36]#011train-error:0.166333#011validation-error:0.1855 [08:11:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [37]#011train-error:0.164333#011validation-error:0.1842 [08:11:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [38]#011train-error:0.163867#011validation-error:0.1855 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? 
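One way to address it, sketched below, is to carve off a slice of the new data before any training happens and only ever report metrics on that held-out slice. The variable names follow this notebook, but `new_XV` was set to `None` earlier to save memory, so in practice the encoded features would need to be re-created first; the 20% split size is likewise an assumption.
###Code
from sklearn.model_selection import train_test_split

# Hold out 20% of the new data up front; nothing from this slice is used for training.
X_fit, X_holdout, y_fit, y_holdout = train_test_split(
    new_XV, new_Y, test_size=0.2, random_state=0)

# X_fit / y_fit would then be split again into train and validation sets for the
# SageMaker training job, while X_holdout / y_holdout are only ever written out
# for a batch transform job that measures generalization on unseen reviews.
###Markdown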
First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ............................2020-08-06T08:16:56.568:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD Arguments: serve [2020-08-06 08:16:56 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-08-06 08:16:56 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-08-06 08:16:56 +0000] [1] [INFO] Using worker: gevent [2020-08-06 08:16:56 +0000] [36] [INFO] Booting worker with pid: 36 [2020-08-06 08:16:56 +0000] [37] [INFO] Booting worker with pid: 37 [2020-08-06:08:16:56:INFO] Model loaded successfully for worker : 36 [2020-08-06 08:16:56 +0000] [38] [INFO] Booting worker with pid: 38 [2020-08-06 08:16:56 +0000] [39] [INFO] Booting worker with pid: 39 [2020-08-06:08:16:56:INFO] Model loaded successfully for worker : 37 [2020-08-06:08:16:56:INFO] Model loaded successfully for worker : 38 [2020-08-06:08:16:56:INFO] Model loaded successfully for worker : 39 Arguments: serve [2020-08-06 08:16:56 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-08-06 08:16:56 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-08-06 08:16:56 +0000] [1] [INFO] Using worker: gevent [2020-08-06 08:16:56 +0000] [36] [INFO] Booting worker with pid: 36 [2020-08-06 08:16:56 +0000] [37] [INFO] Booting worker with pid: 37 [2020-08-06:08:16:56:INFO] Model loaded successfully for worker : 36 [2020-08-06 08:16:56 +0000] [38] [INFO] Booting worker with pid: 38 [2020-08-06 08:16:56 +0000] [39] [INFO] Booting worker with pid: 39 [2020-08-06:08:16:56:INFO] Model loaded successfully for worker : 37 [2020-08-06:08:16:56:INFO] Model loaded successfully for worker : 38 [2020-08-06:08:16:56:INFO] Model loaded successfully for worker : 39 [2020-08-06:08:16:56:INFO] Sniff delimiter as ',' [2020-08-06:08:16:56:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:16:56:INFO] Sniff delimiter as ',' [2020-08-06:08:16:56:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:16:57:INFO] Sniff delimiter as ',' [2020-08-06:08:16:57:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:16:58:INFO] Sniff delimiter as ',' [2020-08-06:08:16:58:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:16:56:INFO] Sniff delimiter as ',' [2020-08-06:08:16:56:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:16:56:INFO] Sniff delimiter as ',' [2020-08-06:08:16:56:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:16:57:INFO] Sniff delimiter as ',' [2020-08-06:08:16:57:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:16:58:INFO] Sniff delimiter as ',' [2020-08-06:08:16:58:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:16:59:INFO] Sniff delimiter as ',' [2020-08-06:08:16:59:INFO] Determined delimiter of CSV input is ',' 
[2020-08-06:08:17:10:INFO] Sniff delimiter as ',' [2020-08-06:08:17:10:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:10:INFO] Sniff delimiter as ',' [2020-08-06:08:17:10:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:10:INFO] Sniff delimiter as ',' [2020-08-06:08:17:10:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:10:INFO] Sniff delimiter as ',' [2020-08-06:08:17:10:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:10:INFO] Sniff delimiter as ',' [2020-08-06:08:17:10:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:10:INFO] Sniff delimiter as ',' [2020-08-06:08:17:10:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:10:INFO] Sniff delimiter as ',' [2020-08-06:08:17:10:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:15:INFO] Sniff delimiter as ',' [2020-08-06:08:17:15:INFO] Sniff delimiter as ',' [2020-08-06:08:17:15:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:15:INFO] Sniff delimiter as ',' [2020-08-06:08:17:15:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:15:INFO] Sniff delimiter as ',' [2020-08-06:08:17:15:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:15:INFO] Sniff delimiter as ',' [2020-08-06:08:17:15:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:15:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:15:INFO] Sniff delimiter as ',' [2020-08-06:08:17:15:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:15:INFO] Sniff delimiter as ',' [2020-08-06:08:17:15:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:15:INFO] Sniff delimiter as ',' [2020-08-06:08:17:15:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:17:INFO] Sniff delimiter as ',' [2020-08-06:08:17:17:INFO] Sniff delimiter as ',' [2020-08-06:08:17:17:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:17:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:17:INFO] Sniff delimiter as ',' [2020-08-06:08:17:17:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:17:INFO] Sniff delimiter as ',' [2020-08-06:08:17:17:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:17:INFO] Sniff delimiter as ',' [2020-08-06:08:17:17:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:17:INFO] Sniff delimiter as ',' [2020-08-06:08:17:17:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:17:INFO] Sniff delimiter as ',' [2020-08-06:08:17:17:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:17:INFO] Sniff delimiter as ',' [2020-08-06:08:17:17:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:19:INFO] Sniff delimiter as ',' [2020-08-06:08:17:19:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:19:INFO] Sniff delimiter as ',' [2020-08-06:08:17:19:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:19:INFO] Sniff delimiter as ',' [2020-08-06:08:17:19:INFO] Determined delimiter of CSV input is ',' [2020-08-06:08:17:19:INFO] Sniff delimiter as ',' [2020-08-06:08:17:19:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. 
###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/365.7 KiB (3.3 MiB/s) with 1 file(s) remaining Completed 365.7 KiB/365.7 KiB (4.6 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-574947211111/xgboost-2020-08-06-08-12-27-074/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. 
As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. # new_xgb_endpoint_config_info = None # Solution: new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "XGB-Model" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint(EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output -------------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. 
Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). 
It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2020-07-21 03:47:18-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 24.1MB/s in 4.1s 2020-07-21 03:47:22 (19.5 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our 
training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. ###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary train_X[0] # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) train_X[0].size len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model of comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code are then used the manipulate the training artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is require when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output WARNING:root:Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2020-07-21 01:29:20 Starting - Starting the training job... 2020-07-21 01:29:23 Starting - Launching requested ML instances...... 2020-07-21 01:30:27 Starting - Preparing the instances for training...... 2020-07-21 01:31:50 Downloading - Downloading input data 2020-07-21 01:31:50 Training - Downloading the training image... 2020-07-21 01:32:10 Training - Training image download completed. Training in progress.Arguments: train [2020-07-21:01:32:10:INFO] Running standalone xgboost training. [2020-07-21:01:32:10:INFO] File size need to be processed in the node: 238.47mb. 
Available memory size in the node: 8481.2mb [2020-07-21:01:32:10:INFO] Determined delimiter of CSV input is ',' [01:32:10] S3DistributionType set as FullyReplicated [01:32:12] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-07-21:01:32:12:INFO] Determined delimiter of CSV input is ',' [01:32:12] S3DistributionType set as FullyReplicated [01:32:13] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [01:32:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 2 pruned nodes, max_depth=5 [0]#011train-error:0.301867#011validation-error:0.2999 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [01:32:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [1]#011train-error:0.285467#011validation-error:0.2804 [01:32:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [2]#011train-error:0.273333#011validation-error:0.2732 [01:32:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [3]#011train-error:0.270667#011validation-error:0.2713 [01:32:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 2 pruned nodes, max_depth=5 [4]#011train-error:0.263#011validation-error:0.2639 [01:32:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 0 pruned nodes, max_depth=5 [5]#011train-error:0.256333#011validation-error:0.2556 [01:32:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [6]#011train-error:0.244267#011validation-error:0.2439 [01:32:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [7]#011train-error:0.2436#011validation-error:0.2467 [01:32:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [8]#011train-error:0.238333#011validation-error:0.2405 [01:32:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [9]#011train-error:0.2306#011validation-error:0.2343 [01:32:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [10]#011train-error:0.225067#011validation-error:0.2309 [01:32:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [11]#011train-error:0.219267#011validation-error:0.2233 [01:32:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 2 pruned nodes, max_depth=5 [12]#011train-error:0.215067#011validation-error:0.2191 [01:32:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [13]#011train-error:0.209333#011validation-error:0.2159 [01:32:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [14]#011train-error:0.205733#011validation-error:0.2126 [01:32:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [15]#011train-error:0.2006#011validation-error:0.2093 [01:32:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [16]#011train-error:0.199067#011validation-error:0.2065 [01:32:39] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 4 pruned nodes, max_depth=5 [17]#011train-error:0.195867#011validation-error:0.2022 [01:32:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [18]#011train-error:0.192133#011validation-error:0.2004 [01:32:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [19]#011train-error:0.1902#011validation-error:0.1968 [01:32:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [20]#011train-error:0.1866#011validation-error:0.1946 [01:32:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [21]#011train-error:0.184533#011validation-error:0.1945 [01:32:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 12 pruned nodes, max_depth=5 [22]#011train-error:0.182333#011validation-error:0.1913 [01:32:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [23]#011train-error:0.1792#011validation-error:0.1896 [01:32:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 2 pruned nodes, max_depth=5 [24]#011train-error:0.176333#011validation-error:0.1885 [01:32:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [25]#011train-error:0.173267#011validation-error:0.1877 [01:32:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [26]#011train-error:0.171467#011validation-error:0.1851 [01:32:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [27]#011train-error:0.169867#011validation-error:0.1848 [01:32:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [28]#011train-error:0.167667#011validation-error:0.1833 [01:32:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5 [29]#011train-error:0.165733#011validation-error:0.1822 [01:32:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 2 pruned nodes, max_depth=5 [30]#011train-error:0.163933#011validation-error:0.179 [01:32:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [31]#011train-error:0.162467#011validation-error:0.1783 [01:32:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 2 pruned nodes, max_depth=5 [32]#011train-error:0.1612#011validation-error:0.1763 [01:33:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [33]#011train-error:0.161333#011validation-error:0.1774 [01:33:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [34]#011train-error:0.16#011validation-error:0.1761 [01:33:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 8 pruned nodes, max_depth=5 [35]#011train-error:0.1592#011validation-error:0.1752 [01:33:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 14 pruned nodes, max_depth=5 [36]#011train-error:0.157933#011validation-error:0.1738 [01:33:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 14 pruned nodes, max_depth=5 [37]#011train-error:0.1568#011validation-error:0.1705 [01:33:06] src/tree/updater_prune.cc:74: tree 
pruning end, 1 roots, 14 extra nodes, 12 pruned nodes, max_depth=5 [38]#011train-error:0.156467#011validation-error:0.1698 [01:33:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [39]#011train-error:0.1546#011validation-error:0.1675

###Markdown
Testing the model

Now that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real time. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference is also useful to us because it means we can perform inference on our entire test set.

To perform a Batch Transformation we first need to create a transformer object from our trained estimator object.

###Code
xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')

###Output
WARNING:sagemaker:Parameter image will be renamed to image_uri in SageMaker Python SDK v2.

###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data, so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set, we tell SageMaker that it can split the input on each line.

###Code
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')

###Output
_____no_output_____
###Markdown
Currently the transform job is running, but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback, we can run the `wait()` method.
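If you would rather not block the notebook while waiting, the low-level SageMaker client can also be polled for the job's status. The sketch below is only illustrative; the job name is a placeholder and would need to be replaced with the name of your own transform job (visible in the SageMaker console).

```python
# Optional sketch: poll the transform job instead of blocking on wait().
# The job name is a placeholder; substitute the name of your own transform job.
job_name = 'xgboost-2020-07-21-01-35-05-161'
desc = session.sagemaker_client.describe_transform_job(TransformJobName=job_name)
print(desc['TransformJobStatus'])  # e.g. 'InProgress', 'Completed' or 'Failed'
```

Here we simply wait for the job to finish.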
###Code xgb_transformer.wait() ###Output ......................Arguments: serve [2020-07-21 01:38:38 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-07-21 01:38:38 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-07-21 01:38:38 +0000] [1] [INFO] Using worker: gevent [2020-07-21 01:38:38 +0000] [37] [INFO] Booting worker with pid: 37 [2020-07-21 01:38:38 +0000] [38] [INFO] Booting worker with pid: 38 [2020-07-21 01:38:38 +0000] [39] [INFO] Booting worker with pid: 39 [2020-07-21 01:38:38 +0000] [40] [INFO] Booting worker with pid: 40 [2020-07-21:01:38:38:INFO] Model loaded successfully for worker : 38 [2020-07-21:01:38:38:INFO] Model loaded successfully for worker : 37 [2020-07-21:01:38:38:INFO] Model loaded successfully for worker : 39 [2020-07-21:01:38:38:INFO] Model loaded successfully for worker : 40 2020-07-21T01:38:46.713:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-07-21:01:38:49:INFO] Sniff delimiter as ',' [2020-07-21:01:38:49:INFO] Sniff delimiter as ',' [2020-07-21:01:38:49:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:49:INFO] Sniff delimiter as ',' [2020-07-21:01:38:49:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:49:INFO] Sniff delimiter as ',' [2020-07-21:01:38:49:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:49:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:49:INFO] Sniff delimiter as ',' [2020-07-21:01:38:49:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:49:INFO] Sniff delimiter as ',' [2020-07-21:01:38:49:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:51:INFO] Sniff delimiter as ',' [2020-07-21:01:38:51:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:51:INFO] Sniff delimiter as ',' [2020-07-21:01:38:51:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:52:INFO] Sniff delimiter as ',' [2020-07-21:01:38:52:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:52:INFO] Sniff delimiter as ',' [2020-07-21:01:38:52:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:52:INFO] Sniff delimiter as ',' [2020-07-21:01:38:52:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:52:INFO] Sniff delimiter as ',' [2020-07-21:01:38:52:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:52:INFO] Sniff delimiter as ',' [2020-07-21:01:38:52:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:52:INFO] Sniff delimiter as ',' [2020-07-21:01:38:52:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:52:INFO] Sniff delimiter as ',' [2020-07-21:01:38:52:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:52:INFO] Sniff delimiter as ',' [2020-07-21:01:38:52:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:57:INFO] Sniff delimiter as ',' [2020-07-21:01:38:57:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:57:INFO] Sniff delimiter as ',' [2020-07-21:01:38:57:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:57:INFO] Sniff delimiter as ',' [2020-07-21:01:38:57:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:57:INFO] Sniff delimiter as ',' [2020-07-21:01:38:57:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:57:INFO] Sniff delimiter as ',' [2020-07-21:01:38:57:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:57:INFO] Sniff delimiter as ',' [2020-07-21:01:38:57:INFO] Determined delimiter of CSV input is ',' 
[2020-07-21:01:38:57:INFO] Sniff delimiter as ',' [2020-07-21:01:38:57:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:57:INFO] Sniff delimiter as ',' [2020-07-21:01:38:57:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:59:INFO] Sniff delimiter as ',' [2020-07-21:01:38:59:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:59:INFO] Sniff delimiter as ',' [2020-07-21:01:38:59:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:59:INFO] Sniff delimiter as ',' [2020-07-21:01:38:59:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:38:59:INFO] Sniff delimiter as ',' [2020-07-21:01:38:59:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:02:INFO] Sniff delimiter as ',' [2020-07-21:01:39:02:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:02:INFO] Sniff delimiter as ',' [2020-07-21:01:39:02:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:02:INFO] Sniff delimiter as ',' [2020-07-21:01:39:02:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:02:INFO] Sniff delimiter as ',' [2020-07-21:01:39:02:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:02:INFO] Sniff delimiter as ',' [2020-07-21:01:39:02:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:02:INFO] Sniff delimiter as ',' [2020-07-21:01:39:02:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:02:INFO] Sniff delimiter as ',' [2020-07-21:01:39:02:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:02:INFO] Sniff delimiter as ',' [2020-07-21:01:39:02:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:04:INFO] Sniff delimiter as ',' [2020-07-21:01:39:04:INFO] Sniff delimiter as ',' [2020-07-21:01:39:04:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:04:INFO] Sniff delimiter as ',' [2020-07-21:01:39:04:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:04:INFO] Sniff delimiter as ',' [2020-07-21:01:39:04:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:04:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:04:INFO] Sniff delimiter as ',' [2020-07-21:01:39:04:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:04:INFO] Sniff delimiter as ',' [2020-07-21:01:39:04:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:05:INFO] Sniff delimiter as ',' [2020-07-21:01:39:05:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:05:INFO] Sniff delimiter as ',' [2020-07-21:01:39:05:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:07:INFO] Sniff delimiter as ',' [2020-07-21:01:39:07:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:07:INFO] Sniff delimiter as ',' [2020-07-21:01:39:07:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:07:INFO] Sniff delimiter as ',' [2020-07-21:01:39:07:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:07:INFO] Sniff delimiter as ',' [2020-07-21:01:39:07:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:07:INFO] Sniff delimiter as ',' [2020-07-21:01:39:07:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:07:INFO] Sniff delimiter as ',' [2020-07-21:01:39:07:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:07:INFO] Sniff delimiter as ',' [2020-07-21:01:39:07:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:07:INFO] Sniff delimiter as ',' [2020-07-21:01:39:07:INFO] Determined delimiter of CSV input is ',' 
[2020-07-21:01:39:09:INFO] Sniff delimiter as ',' [2020-07-21:01:39:09:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:09:INFO] Sniff delimiter as ',' [2020-07-21:01:39:09:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:09:INFO] Sniff delimiter as ',' [2020-07-21:01:39:09:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:09:INFO] Sniff delimiter as ',' [2020-07-21:01:39:09:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:09:INFO] Sniff delimiter as ',' [2020-07-21:01:39:09:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:09:INFO] Sniff delimiter as ',' [2020-07-21:01:39:09:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:10:INFO] Sniff delimiter as ',' [2020-07-21:01:39:10:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:10:INFO] Sniff delimiter as ',' [2020-07-21:01:39:10:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:11:INFO] Sniff delimiter as ',' [2020-07-21:01:39:11:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:11:INFO] Sniff delimiter as ',' [2020-07-21:01:39:11:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:12:INFO] Sniff delimiter as ',' [2020-07-21:01:39:12:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:12:INFO] Sniff delimiter as ',' [2020-07-21:01:39:12:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:11:INFO] Sniff delimiter as ',' [2020-07-21:01:39:11:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:11:INFO] Sniff delimiter as ',' [2020-07-21:01:39:11:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:12:INFO] Sniff delimiter as ',' [2020-07-21:01:39:12:INFO] Determined delimiter of CSV input is ',' [2020-07-21:01:39:12:INFO] Sniff delimiter as ',' [2020-07-21:01:39:12:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/367.9 KiB (3.6 MiB/s) with 1 file(s) remaining Completed 367.9 KiB/367.9 KiB (5.1 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-2-537234121179/xgboost-2020-07-21-01-35-05-161/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). 
The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary = vocabulary,preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.fit_transform(new_X).toarray() new_XV.shape ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv import pandas as pd data_dir = '../data/sentiment_update' pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) #pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. 
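Before running the transform job, it can be worth confirming that the upload landed where we expect by listing the object directly from the notebook. This is only an optional check; `new_data_location` is the S3 URI returned by `upload_data` above.

```python
# Optional check: confirm the uploaded file exists at the expected S3 location.
!aws s3 ls $new_data_location
```

With the data in place, we can run the batch transform job.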
###Code xgb = sagemaker.estimator.Estimator.attach('xgboost-2020-07-21-01-29-20-778') # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output ....................Arguments: serve [2020-07-21 04:01:44 +0000] [1] [INFO] Starting gunicorn 19.7.1 Arguments: serve [2020-07-21 04:01:44 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-07-21 04:01:44 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-07-21 04:01:44 +0000] [1] [INFO] Using worker: gevent [2020-07-21 04:01:44 +0000] [37] [INFO] Booting worker with pid: 37 [2020-07-21 04:01:44 +0000] [38] [INFO] Booting worker with pid: 38 [2020-07-21 04:01:44 +0000] [39] [INFO] Booting worker with pid: 39 [2020-07-21 04:01:44 +0000] [40] [INFO] Booting worker with pid: 40 [2020-07-21:04:01:44:INFO] Model loaded successfully for worker : 39 [2020-07-21:04:01:44:INFO] Model loaded successfully for worker : 37 [2020-07-21:04:01:44:INFO] Model loaded successfully for worker : 38 [2020-07-21:04:01:44:INFO] Model loaded successfully for worker : 40 [2020-07-21 04:01:44 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-07-21 04:01:44 +0000] [1] [INFO] Using worker: gevent [2020-07-21 04:01:44 +0000] [37] [INFO] Booting worker with pid: 37 [2020-07-21 04:01:44 +0000] [38] [INFO] Booting worker with pid: 38 [2020-07-21 04:01:44 +0000] [39] [INFO] Booting worker with pid: 39 [2020-07-21 04:01:44 +0000] [40] [INFO] Booting worker with pid: 40 [2020-07-21:04:01:44:INFO] Model loaded successfully for worker : 39 [2020-07-21:04:01:44:INFO] Model loaded successfully for worker : 37 [2020-07-21:04:01:44:INFO] Model loaded successfully for worker : 38 [2020-07-21:04:01:44:INFO] Model loaded successfully for worker : 40 [2020-07-21:04:01:58:INFO] Sniff delimiter as ',' [2020-07-21:04:01:58:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:01:58:INFO] Sniff delimiter as ',' [2020-07-21:04:01:58:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:01:58:INFO] Sniff delimiter as ',' [2020-07-21:04:01:58:INFO] Sniff delimiter as ',' [2020-07-21:04:01:58:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:01:59:INFO] Sniff delimiter as ',' [2020-07-21:04:01:59:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:01:59:INFO] Sniff delimiter as ',' [2020-07-21:04:01:59:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:01:58:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:01:59:INFO] Sniff delimiter as ',' [2020-07-21:04:01:59:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:01:59:INFO] Sniff delimiter as ',' [2020-07-21:04:01:59:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:01:INFO] Sniff delimiter as ',' [2020-07-21:04:02:01:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:01:INFO] Sniff delimiter as ',' [2020-07-21:04:02:01:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:01:INFO] Sniff delimiter as ',' [2020-07-21:04:02:01:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:01:INFO] Sniff delimiter as ',' [2020-07-21:04:02:01:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:01:INFO] Sniff delimiter as ',' [2020-07-21:04:02:01:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:01:INFO] Sniff 
delimiter as ',' [2020-07-21:04:02:01:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:01:INFO] Sniff delimiter as ',' [2020-07-21:04:02:01:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:01:INFO] Sniff delimiter as ',' [2020-07-21:04:02:01:INFO] Determined delimiter of CSV input is ',' 2020-07-21T04:01:56.096:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2020-07-21:04:02:06:INFO] Sniff delimiter as ',' [2020-07-21:04:02:06:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:06:INFO] Sniff delimiter as ',' [2020-07-21:04:02:06:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:06:INFO] Sniff delimiter as ',' [2020-07-21:04:02:06:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:06:INFO] Sniff delimiter as ',' [2020-07-21:04:02:06:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:06:INFO] Sniff delimiter as ',' [2020-07-21:04:02:06:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:06:INFO] Sniff delimiter as ',' [2020-07-21:04:02:06:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:06:INFO] Sniff delimiter as ',' [2020-07-21:04:02:06:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:06:INFO] Sniff delimiter as ',' [2020-07-21:04:02:06:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:08:INFO] Sniff delimiter as ',' [2020-07-21:04:02:08:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:08:INFO] Sniff delimiter as ',' [2020-07-21:04:02:08:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:08:INFO] Sniff delimiter as ',' [2020-07-21:04:02:08:INFO] Sniff delimiter as ',' [2020-07-21:04:02:08:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:08:INFO] Sniff delimiter as ',' [2020-07-21:04:02:08:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:08:INFO] Sniff delimiter as ',' [2020-07-21:04:02:08:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:08:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:08:INFO] Sniff delimiter as ',' [2020-07-21:04:02:08:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:08:INFO] Sniff delimiter as ',' [2020-07-21:04:02:08:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:10:INFO] Sniff delimiter as ',' [2020-07-21:04:02:10:INFO] Sniff delimiter as ',' [2020-07-21:04:02:10:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:11:INFO] Sniff delimiter as ',' [2020-07-21:04:02:11:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:11:INFO] Sniff delimiter as ',' [2020-07-21:04:02:11:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:11:INFO] Sniff delimiter as ',' [2020-07-21:04:02:11:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:10:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:11:INFO] Sniff delimiter as ',' [2020-07-21:04:02:11:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:11:INFO] Sniff delimiter as ',' [2020-07-21:04:02:11:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:11:INFO] Sniff delimiter as ',' [2020-07-21:04:02:11:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:14:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:14:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:15:INFO] Sniff delimiter as ',' [2020-07-21:04:02:15:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:15:INFO] Sniff 
delimiter as ',' [2020-07-21:04:02:15:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:15:INFO] Sniff delimiter as ',' [2020-07-21:04:02:15:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:15:INFO] Sniff delimiter as ',' [2020-07-21:04:02:15:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:15:INFO] Sniff delimiter as ',' [2020-07-21:04:02:15:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:15:INFO] Sniff delimiter as ',' [2020-07-21:04:02:15:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:17:INFO] Sniff delimiter as ',' [2020-07-21:04:02:17:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:17:INFO] Sniff delimiter as ',' [2020-07-21:04:02:17:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:17:INFO] Sniff delimiter as ',' [2020-07-21:04:02:17:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:17:INFO] Sniff delimiter as ',' [2020-07-21:04:02:17:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:18:INFO] Sniff delimiter as ',' [2020-07-21:04:02:18:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:18:INFO] Sniff delimiter as ',' [2020-07-21:04:02:18:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:18:INFO] Sniff delimiter as ',' [2020-07-21:04:02:18:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:18:INFO] Sniff delimiter as ',' [2020-07-21:04:02:18:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:18:INFO] Sniff delimiter as ',' [2020-07-21:04:02:18:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:18:INFO] Sniff delimiter as ',' [2020-07-21:04:02:18:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:20:INFO] Sniff delimiter as ',' [2020-07-21:04:02:20:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:20:INFO] Sniff delimiter as ',' [2020-07-21:04:02:20:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:20:INFO] Sniff delimiter as ',' [2020-07-21:04:02:20:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:20:INFO] Sniff delimiter as ',' [2020-07-21:04:02:20:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:20:INFO] Sniff delimiter as ',' [2020-07-21:04:02:20:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:20:INFO] Sniff delimiter as ',' [2020-07-21:04:02:20:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:22:INFO] Sniff delimiter as ',' [2020-07-21:04:02:22:INFO] Determined delimiter of CSV input is ',' [2020-07-21:04:02:22:INFO] Sniff delimiter as ',' [2020-07-21:04:02:22:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-us-east-2-537234121179/xgboost-2020-07-21-01-29-20-778-2020-07-21-03-58-21-115/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. 
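The next cell reports plain accuracy. If you also want to see how the errors split between positive and negative reviews, the usual classification metrics can be computed in the same way; a small optional sketch, assuming `predictions` and `new_Y` as defined above.

```python
# Optional sketch: a confusion matrix and per-class scores alongside the accuracy.
from sklearn.metrics import confusion_matrix, precision_score, recall_score

print(confusion_matrix(new_Y, predictions))
print('precision:', precision_score(new_Y, predictions))
print('recall:   ', recall_score(new_Y, predictions))
```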
###Code
from sklearn.metrics import accuracy_score
accuracy_score(new_Y, predictions)

###Output
_____no_output_____
###Markdown
So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user-provided review.

In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.

Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.

To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.

**TODO:** Deploy the XGBoost model.

###Code
# TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'.
xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')

###Output
WARNING:sagemaker:Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
WARNING:sagemaker:Using already existing model: xgboost-2020-07-21-01-29-20-778

###Markdown
Diagnose the problem

Now that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews.

###Code
from sagemaker.predictor import csv_serializer

# We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization.
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer

###Output
_____no_output_____
###Markdown
It will be useful to look at a few different examples of incorrectly classified reviews, so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.

**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples.

###Code
def get_sample(in_X, in_XV, in_Y):
    for idx, smp in enumerate(in_X):
        res = round(float(xgb_predictor.predict(in_XV[idx])))
        if res != in_Y[idx]:
            yield smp, in_Y[idx]

gn = get_sample(new_X, new_XV, new_Y)

###Output
_____no_output_____
###Markdown
At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator.
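If you would rather collect several misclassified reviews at once, `itertools.islice` pairs naturally with this generator. An optional sketch (it consumes items from `gn` just as `next` does):

```python
# Optional sketch: pull the next few misclassified reviews in one go.
from itertools import islice

for words, label in islice(gn, 3):
    # `words` is the processed review, `label` is its true sentiment (0 or 1)
    print(label, words[:15])
```

To start, we print a single sample.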
###Code print(next(gn)) ###Output (['may', 'day', '1938', 'happen', 'huge', 'ralli', 'celebr', 'hitler', 'visit', 'rome', 'serv', 'backdrop', 'love', 'stori', 'antoniett', 'sophia', 'loren', 'marri', 'fascist', 'john', 'vernon', 'gabriel', 'marcello', 'mastroianni', 'bore', 'housewif', 'sever', 'son', 'unhappi', 'solitari', 'homosexu', 'fire', 'radio', 'pursu', 'fascist', 'left', 'alon', 'home', 'spous', 'must', 'attend', 'histor', 'celebr', 'develop', 'enjoy', 'relationship', 'spite', 'differ', 'film', 'set', 'histor', 'meet', 'fuher', 'hitler', 'duce', 'mussolini', 'along', 'other', 'author', 'count', 'ciano', 'king', 'victor', 'manuel', 'iii', 'describ', 'event', 'radio', 'voic', 'sometim', 'irrit', 'romant', 'drama', 'carri', 'sens', 'sensibl', 'unrelentingli', 'passion', 'romanc', 'two', 'conflict', 'charact', 'magnific', 'perform', 'two', 'pro', 'make', 'splendid', 'movi', 'well', 'worth', 'see', 'cours', 'ruggero', 'macarri', 'ettor', 'scola', 'sensibl', 'screenplay', 'result', 'ever', 'interest', 'elabor', 'sentiment', 'color', 'atmospher', 'cinematographi', 'pascualino', 'de', 'santi', 'emot', 'music', 'score', 'armando', 'trovajoli', 'sensit', 'leitmotif', 'film', 'deservedli', 'golden', 'globe', '1978', 'best', 'foreign', 'film', 'director', 'scola', 'imagin', 'stretch', 'light', 'limit', 'scenario', 'develop', 'drama', 'usual', 'film', 'take', 'place', 'stage', 'semi', 'theatric', 'exampl', 'le', 'bal', '1982', 'use', 'french', 'danc', 'hall', 'illustr', 'chang', 'societi', '2', 'nuit', 'varenn', '1983', 'stagecoach', 'scenario', 'meet', 'unlik', 'group', 'thoma', 'pain', 'lui', 'xvi', 'mari', 'antoinett', 'fled', 'revolutionari', 'pari', '3', 'famili', '1987', 'take', 'place', 'famili', 'grand', 'old', 'roman', 'flat', 'cours', '4', 'una', 'giornata', 'particular', 'special', 'day', 'loren', 'mastroianni', 'strike', 'marvel', 'relationship', 'respect', 'apart', 'flat', 'roof', 'banana'], 0) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/feature_extraction/text.py:507: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None' warnings.warn("The parameter 'token_pattern' will not be used" ###Markdown Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'ghetto', '21st', 'weari', 'victorian', 'spill', 'reincarn', 'playboy'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. 
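It can also help to count how often each of these newly appearing words is actually used in the new reviews, since a word that occurs only a handful of times would not explain much. A rough, optional sketch (it simply re-counts `new_X` with the already-fitted `new_vectorizer`, using the two vocabulary sets from the cell above):

```python
# Optional sketch: total occurrences of each word that is new to the vocabulary.
import numpy as np

new_counts = new_vectorizer.transform(new_X)         # sparse document-term matrix
totals = np.asarray(new_counts.sum(axis=0)).ravel()  # total count per vocabulary word

for word in (new_vocabulary - original_vocabulary):
    print(word, totals[new_vectorizer.vocabulary_[word]])
```

First, though, the words themselves: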
###Code
print(new_vocabulary - original_vocabulary)

###Output
{'omin', 'orchestr', 'optimist', 'sophi', 'dubiou', 'banana', 'masterson'}

###Markdown
These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.

**Question:** What exactly is going on here? Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?

**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer; this is meant to be an opportunity to explore the data.

(TODO) Build a new model

Supposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.

To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.

**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.

In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well.

###Code
new_XV = new_vectorizer.transform(new_X).toarray()

###Output
_____no_output_____
###Markdown
And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created.

###Code
len(new_XV[0])

###Output
_____no_output_____
###Markdown
Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3.

###Code
import pandas as pd

# Earlier we shuffled the training dataset so to make things simple we can just assign
# the first 10 000 reviews to the validation set and use the remaining reviews for training.
new_val_X = pd.DataFrame(new_XV[:10000])
new_train_X = pd.DataFrame(new_XV[10000:])

new_val_y = pd.DataFrame(new_Y[:10000])
new_train_y = pd.DataFrame(new_Y[10000:])

###Output
_____no_output_____
###Markdown
In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it.

###Code
new_X = None

###Output
_____no_output_____
###Markdown
Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like, but doing so may increase the cost of running the notebook instance.
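If you are unsure how much room is left on the instance before writing these fairly large csv files, a quick check from inside the notebook is enough (optional):

```python
# Optional check: how much disk space is left on the notebook instance?
!df -h .
```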
###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. # First we create a SageMaker estimator object for our model. # And then set the algorithm specific parameters. new_xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output WARNING:root:Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = None s3_new_input_validation = None s3_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2020-07-21 04:30:12 Starting - Starting the training job... 2020-07-21 04:30:14 Starting - Launching requested ML instances......... 2020-07-21 04:31:44 Starting - Preparing the instances for training...... 2020-07-21 04:33:01 Downloading - Downloading input data 2020-07-21 04:33:01 Training - Downloading the training image..Arguments: train [2020-07-21:04:33:22:INFO] Running standalone xgboost training. [2020-07-21:04:33:22:INFO] File size need to be processed in the node: 238.47mb. 
Available memory size in the node: 8476.2mb [2020-07-21:04:33:22:INFO] Determined delimiter of CSV input is ',' [04:33:22] S3DistributionType set as FullyReplicated [04:33:24] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2020-07-21:04:33:24:INFO] Determined delimiter of CSV input is ',' [04:33:24] S3DistributionType set as FullyReplicated [04:33:25] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, 2020-07-21 04:33:21 Training - Training image download completed. Training in progress.[04:33:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [0]#011train-error:0.316467#011validation-error:0.3136 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [04:33:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 52 extra nodes, 4 pruned nodes, max_depth=5 [1]#011train-error:0.2966#011validation-error:0.2937 [04:33:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 2 pruned nodes, max_depth=5 [2]#011train-error:0.281733#011validation-error:0.2815 [04:33:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [3]#011train-error:0.275333#011validation-error:0.2792 [04:33:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 2 pruned nodes, max_depth=5 [4]#011train-error:0.273533#011validation-error:0.2754 [04:33:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [5]#011train-error:0.259867#011validation-error:0.2633 [04:33:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 8 pruned nodes, max_depth=5 [6]#011train-error:0.2618#011validation-error:0.2614 [04:33:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 8 pruned nodes, max_depth=5 [7]#011train-error:0.254533#011validation-error:0.2565 [04:33:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [8]#011train-error:0.245733#011validation-error:0.2496 [04:33:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [9]#011train-error:0.2424#011validation-error:0.2467 [04:33:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned nodes, max_depth=5 [10]#011train-error:0.2372#011validation-error:0.2446 [04:33:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 8 pruned nodes, max_depth=5 [11]#011train-error:0.229667#011validation-error:0.2397 [04:33:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [12]#011train-error:0.2234#011validation-error:0.2311 [04:33:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 6 pruned nodes, max_depth=5 [13]#011train-error:0.220867#011validation-error:0.2273 [04:33:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 2 pruned nodes, max_depth=5 [14]#011train-error:0.215333#011validation-error:0.2198 [04:33:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [15]#011train-error:0.212533#011validation-error:0.2185 [04:33:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned 
nodes, max_depth=5 [16]#011train-error:0.210867#011validation-error:0.2157 [04:33:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 2 pruned nodes, max_depth=5 [17]#011train-error:0.205#011validation-error:0.2124 [04:33:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5 [18]#011train-error:0.201467#011validation-error:0.2122 [04:33:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [19]#011train-error:0.197733#011validation-error:0.2105 [04:33:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [20]#011train-error:0.194867#011validation-error:0.2081 [04:33:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 6 pruned nodes, max_depth=5 [21]#011train-error:0.193333#011validation-error:0.208 [04:33:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 10 pruned nodes, max_depth=5 [22]#011train-error:0.193533#011validation-error:0.2066 [04:33:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [23]#011train-error:0.189133#011validation-error:0.203 [04:33:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [24]#011train-error:0.185133#011validation-error:0.1997 [04:34:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 10 pruned nodes, max_depth=5 [25]#011train-error:0.183133#011validation-error:0.1987 [04:34:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [26]#011train-error:0.1834#011validation-error:0.1965 [04:34:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 14 pruned nodes, max_depth=5 [27]#011train-error:0.1818#011validation-error:0.1965 [04:34:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [28]#011train-error:0.179733#011validation-error:0.1959 [04:34:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [29]#011train-error:0.1784#011validation-error:0.1933 [04:34:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [30]#011train-error:0.1764#011validation-error:0.1934 [04:34:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [31]#011train-error:0.175533#011validation-error:0.1932 [04:34:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [32]#011train-error:0.1738#011validation-error:0.1918 [04:34:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [33]#011train-error:0.172867#011validation-error:0.1903 [04:34:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [34]#011train-error:0.172067#011validation-error:0.1876 [04:34:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [35]#011train-error:0.171733#011validation-error:0.1876 [04:34:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [36]#011train-error:0.169333#011validation-error:0.1868 [04:34:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 12 pruned nodes, max_depth=5 
[37]#011train-error:0.167533#011validation-error:0.1868 [04:34:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [38]#011train-error:0.167#011validation-error:0.1858 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') ###Output WARNING:sagemaker:Parameter image will be renamed to image_uri in SageMaker Python SDK v2. ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.wait() ###Output ............. ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. 
###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() len(test_X[0]) ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb = sagemaker.estimator.Estimator.attach('xgboost-2020-07-21-04-30-12-722') new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir import pandas as pd predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. 
###Code from time import gmtime, strftime # # And then we ask SageMaker to construct the endpoint configuration # endpoint_config_info = session.sagemaker_client.create_endpoint_config( # EndpointConfigName = endpoint_config_name, # ProductionVariants = [{ # "InstanceType": "ml.m4.xlarge", # "InitialVariantWeight": 1, # "InitialInstanceCount": 1, # "ModelName": model_name, # "VariantName": "AllTraffic" # }]) # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "updated-xgboost-endpoint-movie-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName = new_xgb_endpoint_config_name, ProductionVariants = [{ "InstanceType": "ml.m4.xlarge", "InitialVariantWeight": 1, "InitialInstanceCount": 1, "ModelName": new_xgb_transformer.model_name, "VariantName": "AllTraffic" }]) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. endpoint_name = 'xgboost-2020-07-21-01-29-20-778'#"xgboost-movie-update-endpoint-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # And then we can deploy our endpoint endpoint_info = session.sagemaker_client.update_endpoint( EndpointName = endpoint_name, EndpointConfigName = new_xgb_endpoint_config_name) endpoint_dec = session.wait_for_endpoint(endpoint_name) ###Output -------------! ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code session.sagemaker_client.delete_endpoint(EndpointName = 'xgboost-2020-07-21-01-29-20-778') xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. 
As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. 
###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2019-05-09 19:48:39-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 28.1MB/s in 2.9s 2019-05-09 19:48:42 (28.1 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output _____no_output_____ ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
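To make the encoding concrete before diving into the cached helper below, here is a small self-contained sketch of what a bag-of-words representation looks like for two toy word lists (the toy reviews are made up for illustration and are not part of the IMDb workflow):

###Code

from sklearn.feature_extraction.text import CountVectorizer

# Two "reviews" that have already been tokenized into word lists,
# mirroring the output format of review_to_words above.
toy_docs = [['great', 'movi', 'great', 'act'],
            ['bad', 'movi', 'bad', 'plot']]

# preprocessor/tokenizer are identity functions because the input is
# already a list of words rather than a raw string.
toy_vectorizer = CountVectorizer(preprocessor=lambda x: x, tokenizer=lambda x: x)
toy_features = toy_vectorizer.fit_transform(toy_docs).toarray()

print(toy_vectorizer.vocabulary_)  # maps each word to a column index
print(toy_features)                # each row counts how often each vocabulary word appears in that review

###Output

_____no_output_____

###Markdown

The helper below does exactly this, just with a 5000 word vocabulary, a cache file, and the full IMDb training set.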
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be access by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach although using the low-level approach is certainly an option.Recall the method `upload_data()` which is a member of object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker. 
It is easiest to think of a model of comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code are then used the manipulate the training artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is require when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output _____no_output_____ ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMakers Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately and instead we can peform inference on a large number of samples. An example of this in industry might be peforming an end of month report. This method of inference can also be useful to us as it means to can perform inference on our entire test set. To perform a Batch Transformation we need to first create a transformer objects from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job. 
When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method. ###Code xgb_transformer.wait() ###Output _____no_output_____ ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. 
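If you are unsure where to start, one reasonable approach (a sketch, not the only possible answer) is to hand the `vocabulary` dictionary returned by `extract_BoW_features` to a fresh `CountVectorizer`, so the new reviews are encoded with exactly the same 5000 columns the deployed model was trained on:

###Code

# Sketch: reuse the original vocabulary so the column ordering matches the
# features that the existing XGBoost model expects.
vectorizer = CountVectorizer(vocabulary=vocabulary,
                             preprocessor=lambda x: x,
                             tokenizer=lambda x: x)

# new_X is already tokenized into word lists, so transform() is all that is needed.
new_XV = vectorizer.transform(new_X).toarray()

###Output

_____no_output_____

###Markdown

Because the vocabulary is fixed up front there is no need to call `fit` here; the sanity check a little further down expects each encoded review to have length 5000.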
###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = None # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = None ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. ###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = None ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. ###Output _____no_output_____ ###Markdown As usual, we copy the results of the batch transform job to our local instance. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user provided review.In a real life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews matches the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. 
This will also serve as a nice excuse to deploy our model so that we can mimic a real life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = None ###Output _____no_output_____ ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly. To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output _____no_output_____ ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set, the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed, maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizor` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output _____no_output_____ ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output _____no_output_____ ###Markdown These words themselves don't tell us much, however if one of these words occured with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here. 
Not only what (if any) words appear with a larger than expected frequency but also, what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model. This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send an bag of words encoded review using the *original* vocabulary we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. 
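Before writing anything to disk, a quick way to quantify the vocabulary-shift question raised above is to total up how often the words that are new to the vocabulary actually occur. This is only a sketch and assumes `new_XV`, `new_vectorizer`, and the `original_vocabulary`/`new_vocabulary` sets from the earlier cells are still in memory:

###Code

# Words that appear in the new vocabulary but not in the original one,
# together with their total counts in the newly collected reviews.
only_new = new_vocabulary - original_vocabulary

column_totals = new_XV.sum(axis=0)            # total count per bag-of-words column
word_to_column = new_vectorizer.vocabulary_   # word -> column index

freqs = {w: int(column_totals[word_to_column[w]]) for w in only_new}

# The ten most frequent words that the original model never saw.
print(sorted(freqs.items(), key=lambda kv: kv[1], reverse=True)[:10])

###Output

_____no_output_____

###Markdown

Returning to the workflow, the next cell writes the new data and the new train/validation splits to disk.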
###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3. new_data_location = None new_val_location = None new_train_location = None ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occured in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = None # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = None s3_new_input_validation = None # TODO: Using the new validation and training data, 'fit' your new model. ###Output _____no_output_____ ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = None ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. ###Output _____no_output_____ ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output _____no_output_____ ###Markdown And see how well the model did. 
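For the transformer TODOs just above, the completed run earlier in this document used essentially the following; treat it as a reference sketch rather than the required answer (it assumes `new_xgb` was created and fit with the same kind of settings as the original `xgb` estimator):

###Code

# Build a transformer from the newly trained estimator and run a batch
# transform over the new data, then block until the job finishes.
new_xgb_transformer = new_xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line')
new_xgb_transformer.wait()

###Output

_____no_output_____

###Markdown

Once the batch transform has finished and its output has been copied locally, the next cell reads the predictions back in and scores them against `new_Y`.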
###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output _____no_output_____ ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = None ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. 
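The next cell simply prints that model name. For the endpoint-configuration TODO that follows it, the completed run earlier in this document used roughly the low-level call below; it is reproduced here as a reference sketch (the name prefix is illustrative, the only hard requirement being that the configuration name is unique):

###Code

from time import gmtime, strftime

# A time-stamped name is an easy way to guarantee uniqueness.
new_xgb_endpoint_config_name = "sentiment-update-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())

# Low-level construction of the endpoint configuration, pointing at the model
# object that the transformer created behind the scenes.
new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(
                                    EndpointConfigName = new_xgb_endpoint_config_name,
                                    ProductionVariants = [{
                                        "InstanceType": "ml.m4.xlarge",
                                        "InitialVariantWeight": 1,
                                        "InitialInstanceCount": 1,
                                        "ModelName": new_xgb_transformer.model_name,
                                        "VariantName": "AllTraffic"
                                    }])

###Output

_____no_output_____

###Markdown

Updating the live endpoint is then a single `update_endpoint` call that passes the existing endpoint name together with this new configuration name.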
###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = None # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = None ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output _____no_output_____ ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large. Say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. 
###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____ ###Markdown Sentiment Analysis Updating a Model in SageMaker_Deep Learning Nanodegree Program | Deployment_---In this notebook we will consider a situation in which a model that we constructed is no longer working as we intended. In particular, we will look at the XGBoost sentiment analysis model that we constructed earlier. In this case, however, we have some new data that our model doesn't seem to perform very well on. As a result, we will re-train our model and update an existing endpoint so that it uses our new model.This notebook starts by re-creating the XGBoost sentiment analysis model that was created in earlier notebooks. This means that you will have already seen the cells up to the end of Step 4. The new content in this notebook begins at Step 5. InstructionsSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a ` TODO: ...` comment. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted. Step 1: Downloading the dataThe dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset. ###Code %mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data ###Output mkdir: cannot create directory ‘../data’: File exists --2019-10-17 20:53:48-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 
200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 22.6MB/s in 4.9s 2019-10-17 20:53:53 (16.2 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825] ###Markdown Step 2: Preparing the dataThe data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing. ###Code import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100] ###Output _____no_output_____ ###Markdown Step 3: Processing the dataNow that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be useable by our machine learning algorithm. To begin with, we remove any html formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data. 
###Code import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words review_to_words(train_X[100]) import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y) ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Extract Bag-of-Words featuresFor the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set so our transformer can only use the training set to construct a representation. 
###Code import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.externals import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X) len(train_X[100]) ###Output _____no_output_____ ###Markdown Step 4: Classification using XGBoostNow that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker. Writing the datasetThe XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts, the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMakers Batch Transform functionality to test our model once we've fit it. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. 
val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) ###Output _____no_output_____ ###Markdown The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code # First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/sentiment_update' if not os.path.exists(data_dir): os.makedirs(data_dir) pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set test_X, train_X, val_X, train_y and val_y to None. test_X = train_X = val_X = train_y = val_y = None ###Output _____no_output_____ ###Markdown Uploading Training / Validation files to S3Amazon's S3 service allows us to store files that can be accessed by both the built-in training models such as the XGBoost model we will be using as well as custom models such as the one we will see a little later.For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low level functionality of SageMaker, which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high level functionality, in which certain choices have been made on the user's behalf. The low level approach benefits from allowing the user a great deal of flexibility while the high level approach makes development much quicker. For our purposes we will opt to use the high level approach, although using the low level approach is certainly an option.Recall the method `upload_data()` which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__ ###Code import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-update' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Creating the XGBoost modelNow that the data has been uploaded it is time to create the XGBoost model. To begin with, we need to do some setup. At this point it is worth discussing what a model is in SageMaker.
It is easiest to think of a model as comprising three different objects in the SageMaker ecosystem, which interact with one another.- Model Artifacts- Training Code (Container)- Inference Code (Container)The Model Artifacts are what you might think of as the actual model itself. For example, if you were building a neural network, the model artifacts would be the weights of the various layers. In our case, for an XGBoost model, the artifacts are the actual trees that are created during training.The other two objects, the training code and the inference code, are then used to manipulate the model artifacts. More precisely, the training code uses the training data that is provided and creates the model artifacts, while the inference code uses the model artifacts to make predictions on new data.The way that SageMaker runs the training and inference code is by making use of Docker containers. For now, think of a container as being a way of packaging code up so that dependencies aren't an issue. ###Code from sagemaker import get_execution_role # Our current execution role is required when creating the model as the training # and inference code will need to access the model artifacts. role = get_execution_role() # We need to retrieve the location of the container which is provided by Amazon for using XGBoost. # As a matter of convenience, the training and inference code both use the same container. from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(session.boto_region_name, 'xgboost') # First we create a SageMaker estimator object for our model. xgb = sagemaker.estimator.Estimator(container, # The location of the container we wish to use role, # What is our current IAM Role train_instance_count=1, # How many compute instances train_instance_type='ml.m4.xlarge', # What kind of compute instances output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session) # And then set the algorithm specific parameters. xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500) ###Output _____no_output_____ ###Markdown Fit the XGBoost modelNow that our model has been set up we simply need to attach the training and validation datasets and then ask SageMaker to set up the computation. ###Code s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ###Output 2019-10-17 21:03:11 Starting - Starting the training job... 2019-10-17 21:03:13 Starting - Launching requested ML instances......... 2019-10-17 21:04:43 Starting - Preparing the instances for training... 2019-10-17 21:05:42 Downloading - Downloading input data...... 2019-10-17 21:06:35 Training - Training image download completed. Training in progress..Arguments: train [2019-10-17:21:06:36:INFO] Running standalone xgboost training. [2019-10-17:21:06:36:INFO] File size need to be processed in the node: 238.47mb.
Available memory size in the node: 8610.26mb [2019-10-17:21:06:36:INFO] Determined delimiter of CSV input is ',' [21:06:36] S3DistributionType set as FullyReplicated [21:06:38] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2019-10-17:21:06:38:INFO] Determined delimiter of CSV input is ',' [21:06:38] S3DistributionType set as FullyReplicated [21:06:39] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [21:06:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 6 pruned nodes, max_depth=5 [0]#011train-error:0.301667#011validation-error:0.2985 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds. [21:06:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 0 pruned nodes, max_depth=5 [1]#011train-error:0.283733#011validation-error:0.2793 [21:06:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [2]#011train-error:0.272933#011validation-error:0.2704 [21:06:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 0 pruned nodes, max_depth=5 [3]#011train-error:0.272933#011validation-error:0.2701 [21:06:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 2 pruned nodes, max_depth=5 [4]#011train-error:0.259#011validation-error:0.257 [21:06:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 2 pruned nodes, max_depth=5 [5]#011train-error:0.253733#011validation-error:0.2539 [21:06:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 2 pruned nodes, max_depth=5 [6]#011train-error:0.246933#011validation-error:0.2483 [21:06:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [7]#011train-error:0.2322#011validation-error:0.2333 [21:06:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [8]#011train-error:0.234867#011validation-error:0.2388 [21:06:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [9]#011train-error:0.227133#011validation-error:0.2315 [21:06:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [10]#011train-error:0.221467#011validation-error:0.2279 [21:06:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [11]#011train-error:0.217467#011validation-error:0.2229 [21:06:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 4 pruned nodes, max_depth=5 [12]#011train-error:0.211733#011validation-error:0.2169 [21:06:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [13]#011train-error:0.208133#011validation-error:0.2141 [21:07:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [14]#011train-error:0.204867#011validation-error:0.2108 [21:07:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 2 pruned nodes, max_depth=5 [15]#011train-error:0.203133#011validation-error:0.2066 [21:07:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 8 pruned nodes, max_depth=5 [16]#011train-error:0.1992#011validation-error:0.2047 [21:07:04] 
src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [17]#011train-error:0.1966#011validation-error:0.204 [21:07:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 10 pruned nodes, max_depth=5 [18]#011train-error:0.194267#011validation-error:0.2026 [21:07:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [19]#011train-error:0.1904#011validation-error:0.1983 [21:07:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 10 pruned nodes, max_depth=5 [20]#011train-error:0.186867#011validation-error:0.1962 [21:07:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [21]#011train-error:0.184067#011validation-error:0.1954 [21:07:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [22]#011train-error:0.181533#011validation-error:0.1942 [21:07:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [23]#011train-error:0.180667#011validation-error:0.1932 [21:07:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 4 pruned nodes, max_depth=5 [24]#011train-error:0.177533#011validation-error:0.1916 [21:07:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 10 pruned nodes, max_depth=5 [25]#011train-error:0.176667#011validation-error:0.1892 [21:07:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 4 pruned nodes, max_depth=5 [26]#011train-error:0.174467#011validation-error:0.1853 [21:07:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 4 pruned nodes, max_depth=5 [27]#011train-error:0.172133#011validation-error:0.1833 [21:07:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 2 pruned nodes, max_depth=5 [28]#011train-error:0.17#011validation-error:0.1827 [21:07:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [29]#011train-error:0.168133#011validation-error:0.1837 [21:07:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5 [30]#011train-error:0.165533#011validation-error:0.1818 [21:07:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [31]#011train-error:0.164133#011validation-error:0.1816 [21:07:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5 [32]#011train-error:0.162733#011validation-error:0.1799 [21:07:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 4 pruned nodes, max_depth=5 [33]#011train-error:0.161733#011validation-error:0.1777 [21:07:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [34]#011train-error:0.160733#011validation-error:0.177 [21:07:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5 [35]#011train-error:0.159267#011validation-error:0.1752 [21:07:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 4 pruned nodes, max_depth=5 [36]#011train-error:0.1588#011validation-error:0.1735 [21:07:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [37]#011train-error:0.157133#011validation-error:0.1722 [21:07:30] src/tree/updater_prune.cc:74: 
tree pruning end, 1 roots, 14 extra nodes, 18 pruned nodes, max_depth=5 [38]#011train-error:0.1572#011validation-error:0.1721 [21:07:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 6 pruned nodes, max_depth=5 [39]#011train-error:0.1562#011validation-error:0.1711 [21:07:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 6 pruned nodes, max_depth=5 [40]#011train-error:0.1542#011validation-error:0.1704 [21:07:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 18 pruned nodes, max_depth=5 [41]#011train-error:0.153333#011validation-error:0.1704 [21:07:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 4 pruned nodes, max_depth=5 [42]#011train-error:0.152533#011validation-error:0.1695 [21:07:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 10 pruned nodes, max_depth=5 [43]#011train-error:0.151#011validation-error:0.17 ###Markdown Testing the modelNow that we've fit our XGBoost model, it's time to see how well it performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not real-time. That is, we don't necessarily need to use our model's results immediately; instead, we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set. To perform a Batch Transformation we first need to create a transformer object from our trained estimator object. ###Code xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ###Output _____no_output_____ ###Markdown Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line. ###Code xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') ###Output _____no_output_____ ###Markdown Currently the transform job is running, but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback, we can run the `wait()` method.
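As an aside, `wait()` is roughly equivalent to polling the job status ourselves through the low-level `boto3` client, as in the sketch below; the job name shown is a hypothetical placeholder, since SageMaker assigns the real name (it is visible in the SageMaker console and in the SDK's log output).
###Code # Rough sketch of what wait() does on our behalf: poll the transform job's status
# until it reaches a terminal state. The job name below is a hypothetical placeholder.
import time
import boto3

sagemaker_client = boto3.client('sagemaker')
transform_job_name = 'my-transform-job-name'  # replace with the actual job name

while True:
    description = sagemaker_client.describe_transform_job(TransformJobName=transform_job_name)
    status = description['TransformJobStatus']  # e.g. 'InProgress', 'Completed', 'Failed'
    print(status)
    if status in ('Completed', 'Failed', 'Stopped'):
        break
    time.sleep(30)
In the notebook we simply rely on the high level `wait()` call, which also streams the job's logs while we wait.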
###Code xgb_transformer.wait() ###Output .......................Arguments: serve [2019-10-17 21:13:37 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2019-10-17 21:13:37 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2019-10-17 21:13:37 +0000] [1] [INFO] Using worker: gevent [2019-10-17 21:13:37 +0000] [38] [INFO] Booting worker with pid: 38 [2019-10-17 21:13:37 +0000] [39] [INFO] Booting worker with pid: 39 [2019-10-17 21:13:37 +0000] [40] [INFO] Booting worker with pid: 40 [2019-10-17:21:13:37:INFO] Model loaded successfully for worker : 38 [2019-10-17:21:13:37:INFO] Model loaded successfully for worker : 39 [2019-10-17 21:13:37 +0000] [41] [INFO] Booting worker with pid: 41 [2019-10-17:21:13:37:INFO] Model loaded successfully for worker : 40 [2019-10-17:21:13:37:INFO] Model loaded successfully for worker : 41 2019-10-17T21:13:55.657:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2019-10-17:21:13:58:INFO] Sniff delimiter as ',' [2019-10-17:21:13:58:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:13:58:INFO] Sniff delimiter as ',' [2019-10-17:21:13:58:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:13:58:INFO] Sniff delimiter as ',' [2019-10-17:21:13:58:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:13:58:INFO] Sniff delimiter as ',' [2019-10-17:21:13:58:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:13:58:INFO] Sniff delimiter as ',' [2019-10-17:21:13:58:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:13:58:INFO] Sniff delimiter as ',' [2019-10-17:21:13:58:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:00:INFO] Sniff delimiter as ',' [2019-10-17:21:14:00:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:00:INFO] Sniff delimiter as ',' [2019-10-17:21:14:00:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:01:INFO] Sniff delimiter as ',' [2019-10-17:21:14:01:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:01:INFO] Sniff delimiter as ',' [2019-10-17:21:14:01:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:01:INFO] Sniff delimiter as ',' [2019-10-17:21:14:01:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:01:INFO] Sniff delimiter as ',' [2019-10-17:21:14:01:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:02:INFO] Sniff delimiter as ',' [2019-10-17:21:14:02:INFO] Sniff delimiter as ',' [2019-10-17:21:14:02:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:02:INFO] Sniff delimiter as ',' [2019-10-17:21:14:02:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:02:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:02:INFO] Sniff delimiter as ',' [2019-10-17:21:14:02:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:04:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:04:INFO] Sniff delimiter as ',' [2019-10-17:21:14:04:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:04:INFO] Sniff delimiter as ',' [2019-10-17:21:14:04:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:04:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:04:INFO] Sniff delimiter as ',' [2019-10-17:21:14:04:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:04:INFO] Sniff delimiter as ',' [2019-10-17:21:14:04:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:06:INFO] Sniff delimiter as ',' [2019-10-17:21:14:06:INFO] Determined delimiter of CSV input is 
',' [2019-10-17:21:14:06:INFO] Sniff delimiter as ',' [2019-10-17:21:14:06:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:06:INFO] Sniff delimiter as ',' [2019-10-17:21:14:06:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:06:INFO] Sniff delimiter as ',' [2019-10-17:21:14:06:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:06:INFO] Sniff delimiter as ',' [2019-10-17:21:14:06:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:06:INFO] Sniff delimiter as ',' [2019-10-17:21:14:06:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:07:INFO] Sniff delimiter as ',' [2019-10-17:21:14:07:INFO] Sniff delimiter as ',' [2019-10-17:21:14:07:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:07:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:09:INFO] Sniff delimiter as ',' [2019-10-17:21:14:09:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:09:INFO] Sniff delimiter as ',' [2019-10-17:21:14:09:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:09:INFO] Sniff delimiter as ',' [2019-10-17:21:14:09:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:09:INFO] Sniff delimiter as ',' [2019-10-17:21:14:09:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:09:INFO] Sniff delimiter as ',' [2019-10-17:21:14:09:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:09:INFO] Sniff delimiter as ',' [2019-10-17:21:14:09:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:09:INFO] Sniff delimiter as ',' [2019-10-17:21:14:09:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:09:INFO] Sniff delimiter as ',' [2019-10-17:21:14:09:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:11:INFO] Sniff delimiter as ',' [2019-10-17:21:14:11:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:11:INFO] Sniff delimiter as ',' [2019-10-17:21:14:11:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:11:INFO] Sniff delimiter as ',' [2019-10-17:21:14:11:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:11:INFO] Sniff delimiter as ',' [2019-10-17:21:14:11:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:11:INFO] Sniff delimiter as ',' [2019-10-17:21:14:11:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:11:INFO] Sniff delimiter as ',' [2019-10-17:21:14:11:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:11:INFO] Sniff delimiter as ',' [2019-10-17:21:14:11:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:11:INFO] Sniff delimiter as ',' [2019-10-17:21:14:11:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:13:INFO] Sniff delimiter as ',' [2019-10-17:21:14:13:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:13:INFO] Sniff delimiter as ',' [2019-10-17:21:14:13:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:13:INFO] Sniff delimiter as ',' [2019-10-17:21:14:13:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:13:INFO] Sniff delimiter as ',' [2019-10-17:21:14:13:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:16:INFO] Sniff delimiter as ',' [2019-10-17:21:14:16:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:16:INFO] Sniff delimiter as ',' [2019-10-17:21:14:16:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:16:INFO] Sniff delimiter as ',' [2019-10-17:21:14:16:INFO] Determined delimiter of CSV input is ',' 
[2019-10-17:21:14:16:INFO] Sniff delimiter as ',' [2019-10-17:21:14:16:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:16:INFO] Sniff delimiter as ',' [2019-10-17:21:14:16:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:16:INFO] Sniff delimiter as ',' [2019-10-17:21:14:16:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:16:INFO] Sniff delimiter as ',' [2019-10-17:21:14:16:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:16:INFO] Sniff delimiter as ',' [2019-10-17:21:14:16:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:18:INFO] Sniff delimiter as ',' [2019-10-17:21:14:18:INFO] Sniff delimiter as ',' [2019-10-17:21:14:18:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:18:INFO] Sniff delimiter as ',' [2019-10-17:21:14:18:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:18:INFO] Sniff delimiter as ',' [2019-10-17:21:14:18:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:18:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:18:INFO] Sniff delimiter as ',' [2019-10-17:21:14:18:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:18:INFO] Sniff delimiter as ',' [2019-10-17:21:14:18:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:19:INFO] Sniff delimiter as ',' [2019-10-17:21:14:19:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:19:INFO] Sniff delimiter as ',' [2019-10-17:21:14:19:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:20:INFO] Sniff delimiter as ',' [2019-10-17:21:14:20:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:20:INFO] Sniff delimiter as ',' [2019-10-17:21:14:20:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:21:INFO] Sniff delimiter as ',' [2019-10-17:21:14:21:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:21:INFO] Sniff delimiter as ',' [2019-10-17:21:14:21:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:21:INFO] Sniff delimiter as ',' [2019-10-17:21:14:21:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:21:INFO] Sniff delimiter as ',' [2019-10-17:21:14:21:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:21:INFO] Sniff delimiter as ',' [2019-10-17:21:14:21:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:14:21:INFO] Sniff delimiter as ',' [2019-10-17:21:14:21:INFO] Determined delimiter of CSV input is ',' ###Markdown Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`. ###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-us-east-1-651156768626/xgboost-2019-10-17-21-09-56-396/test.csv.out to ../data/sentiment_update/test.csv.out ###Markdown The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels. 
###Code predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions) ###Output _____no_output_____ ###Markdown Step 5: Looking at New DataSo now we have an XGBoost sentiment analysis model that we believe is working pretty well. As a result, we deployed it and we are using it in some sort of app.However, as we allow users to use our app we periodically record submitted movie reviews so that we can perform some quality control on our deployed model. Once we've accumulated enough reviews we go through them by hand and evaluate whether they are positive or negative (there are many ways you might do this in practice aside from by hand). The reason for doing this is so that we can check to see how well our model is doing. ###Code import new_data new_X, new_Y = new_data.get_new_data() ###Output _____no_output_____ ###Markdown **NOTE:** Part of the fun in this notebook is trying to figure out what exactly is happening with the new data, so try not to cheat by looking in the `new_data` module. Also, the `new_data` module assumes that the cache created earlier in Step 3 is still stored in `../cache/sentiment_analysis`. (TODO) Testing the current modelNow that we've loaded the new data, let's check to see how our current XGBoost model performs on it.First, note that the data that has been loaded has already been pre-processed so that each entry in `new_X` is a list of words that have been processed using `nltk`. However, we have not yet constructed the bag of words encoding, which we will do now.First, we use the vocabulary that we constructed earlier using the original training data to construct a `CountVectorizer` which we will use to transform our new data into its bag of words encoding.**TODO:** Create the CountVectorizer object using the vocabulary created earlier and use it to transform the new data. ###Code # TODO: Create the CountVectorizer using the previously constructed vocabulary vectorizer = CountVectorizer(vocabulary=vocabulary, preprocessor=lambda x: x, tokenizer=lambda x: x) # TODO: Transform our new data set and store the transformed data in the variable new_XV new_XV = vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown As a quick sanity check, we make sure that the length of each of our bag of words encoded reviews is correct. In particular, it must be the same size as the vocabulary which in our case is `5000`. ###Code len(new_XV[100]) ###Output _____no_output_____ ###Markdown Now that we've performed the data processing that is required by our model we can save it locally and then upload it to S3 so that we can construct a batch transform job in order to see how well our model is working.First, we save the data locally.**TODO:** Save the new data (after it has been transformed using the original vocabulary) to the local notebook instance. ###Code # TODO: Save the data contained in new_XV locally in the data_dir with the file name new_data.csv pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Next, we upload the data to S3.**TODO:** Upload the csv file created above to S3. 
###Code # TODO: Upload the new_data.csv file contained in the data_dir folder to S3 and save the resulting # URI as new_data_location new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Then, once the new data has been uploaded to S3, we create and run the batch transform job to get our model's predictions about the sentiment of the new movie reviews.**TODO:** Using the `xgb_transformer` object that was created earlier (at the end of Step 4 to test the XGBoost model), transform the data located at `new_data_location`. ###Code # TODO: Using xgb_transformer, transform the new_data_location data. You may wish to **wait** until # the batch transform job has finished. xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ###Output .....................Arguments: serve [2019-10-17 21:36:56 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2019-10-17 21:36:56 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2019-10-17 21:36:56 +0000] [1] [INFO] Using worker: gevent [2019-10-17 21:36:56 +0000] [39] [INFO] Booting worker with pid: 39 [2019-10-17 21:36:56 +0000] [40] [INFO] Booting worker with pid: 40 [2019-10-17 21:36:56 +0000] [41] [INFO] Booting worker with pid: 41 [2019-10-17 21:36:56 +0000] [42] [INFO] Booting worker with pid: 42 [2019-10-17:21:36:56:INFO] Model loaded successfully for worker : 39 [2019-10-17:21:36:56:INFO] Model loaded successfully for worker : 40 [2019-10-17:21:36:56:INFO] Model loaded successfully for worker : 41 [2019-10-17:21:36:56:INFO] Model loaded successfully for worker : 42 2019-10-17T21:37:17.896:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2019-10-17:21:37:20:INFO] Sniff delimiter as ',' [2019-10-17:21:37:20:INFO] Sniff delimiter as ',' [2019-10-17:21:37:20:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:20:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:20:INFO] Sniff delimiter as ',' [2019-10-17:21:37:20:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:20:INFO] Sniff delimiter as ',' [2019-10-17:21:37:20:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:21:INFO] Sniff delimiter as ',' [2019-10-17:21:37:21:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:20:INFO] Sniff delimiter as ',' [2019-10-17:21:37:20:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:20:INFO] Sniff delimiter as ',' [2019-10-17:21:37:20:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:21:INFO] Sniff delimiter as ',' [2019-10-17:21:37:21:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:22:INFO] Sniff delimiter as ',' [2019-10-17:21:37:22:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:23:INFO] Sniff delimiter as ',' [2019-10-17:21:37:23:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:22:INFO] Sniff delimiter as ',' [2019-10-17:21:37:22:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:23:INFO] Sniff delimiter as ',' [2019-10-17:21:37:23:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:23:INFO] Sniff delimiter as ',' [2019-10-17:21:37:23:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:23:INFO] Sniff delimiter as ',' [2019-10-17:21:37:23:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:23:INFO] Sniff delimiter as ',' [2019-10-17:21:37:23:INFO] Determined delimiter of CSV input is ',' 
[2019-10-17:21:37:23:INFO] Sniff delimiter as ',' [2019-10-17:21:37:23:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:25:INFO] Sniff delimiter as ',' [2019-10-17:21:37:25:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:26:INFO] Sniff delimiter as ',' [2019-10-17:21:37:26:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:25:INFO] Sniff delimiter as ',' [2019-10-17:21:37:25:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:26:INFO] Sniff delimiter as ',' [2019-10-17:21:37:26:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:27:INFO] Sniff delimiter as ',' [2019-10-17:21:37:27:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:27:INFO] Sniff delimiter as ',' [2019-10-17:21:37:27:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:27:INFO] Sniff delimiter as ',' [2019-10-17:21:37:27:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:27:INFO] Sniff delimiter as ',' [2019-10-17:21:37:27:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:28:INFO] Sniff delimiter as ',' [2019-10-17:21:37:28:INFO] Sniff delimiter as ',' [2019-10-17:21:37:28:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:28:INFO] Sniff delimiter as ',' [2019-10-17:21:37:28:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:28:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:28:INFO] Sniff delimiter as ',' [2019-10-17:21:37:28:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:30:INFO] Sniff delimiter as ',' [2019-10-17:21:37:30:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:30:INFO] Sniff delimiter as ',' [2019-10-17:21:37:30:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:30:INFO] Sniff delimiter as ',' [2019-10-17:21:37:30:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:30:INFO] Sniff delimiter as ',' [2019-10-17:21:37:30:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:30:INFO] Sniff delimiter as ',' [2019-10-17:21:37:30:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:30:INFO] Sniff delimiter as ',' [2019-10-17:21:37:30:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:30:INFO] Sniff delimiter as ',' [2019-10-17:21:37:30:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:30:INFO] Sniff delimiter as ',' [2019-10-17:21:37:30:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:32:INFO] Sniff delimiter as ',' [2019-10-17:21:37:32:INFO] Sniff delimiter as ',' [2019-10-17:21:37:32:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:32:INFO] Sniff delimiter as ',' [2019-10-17:21:37:32:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:33:INFO] Sniff delimiter as ',' [2019-10-17:21:37:33:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:32:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:32:INFO] Sniff delimiter as ',' [2019-10-17:21:37:32:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:33:INFO] Sniff delimiter as ',' [2019-10-17:21:37:33:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:33:INFO] Sniff delimiter as ',' [2019-10-17:21:37:33:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:33:INFO] Sniff delimiter as ',' [2019-10-17:21:37:33:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:35:INFO] Sniff delimiter as ',' [2019-10-17:21:37:35:INFO] Sniff delimiter as ',' [2019-10-17:21:37:35:INFO] 
Determined delimiter of CSV input is ',' [2019-10-17:21:37:35:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:35:INFO] Sniff delimiter as ',' [2019-10-17:21:37:35:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:35:INFO] Sniff delimiter as ',' [2019-10-17:21:37:35:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:35:INFO] Sniff delimiter as ',' [2019-10-17:21:37:35:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:35:INFO] Sniff delimiter as ',' [2019-10-17:21:37:35:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:37:INFO] Sniff delimiter as ',' [2019-10-17:21:37:37:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:37:INFO] Sniff delimiter as ',' [2019-10-17:21:37:37:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:37:INFO] Sniff delimiter as ',' [2019-10-17:21:37:37:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:38:INFO] Sniff delimiter as ',' [2019-10-17:21:37:38:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:37:INFO] Sniff delimiter as ',' [2019-10-17:21:37:37:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:37:INFO] Sniff delimiter as ',' [2019-10-17:21:37:37:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:37:INFO] Sniff delimiter as ',' [2019-10-17:21:37:37:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:38:INFO] Sniff delimiter as ',' [2019-10-17:21:37:38:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:39:INFO] Sniff delimiter as ',' [2019-10-17:21:37:39:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:39:INFO] Sniff delimiter as ',' [2019-10-17:21:37:39:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:40:INFO] Sniff delimiter as ',' [2019-10-17:21:37:40:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:39:INFO] Sniff delimiter as ',' [2019-10-17:21:37:39:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:39:INFO] Sniff delimiter as ',' [2019-10-17:21:37:39:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:40:INFO] Sniff delimiter as ',' [2019-10-17:21:37:40:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:40:INFO] Sniff delimiter as ',' [2019-10-17:21:37:40:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:40:INFO] Sniff delimiter as ',' [2019-10-17:21:37:40:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:42:INFO] Sniff delimiter as ',' [2019-10-17:21:37:42:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:42:INFO] Sniff delimiter as ',' [2019-10-17:21:37:42:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:42:INFO] Sniff delimiter as ',' [2019-10-17:21:37:42:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:42:INFO] Sniff delimiter as ',' [2019-10-17:21:37:42:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:42:INFO] Sniff delimiter as ',' [2019-10-17:21:37:42:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:42:INFO] Sniff delimiter as ',' [2019-10-17:21:37:42:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:42:INFO] Sniff delimiter as ',' [2019-10-17:21:37:42:INFO] Determined delimiter of CSV input is ',' [2019-10-17:21:37:42:INFO] Sniff delimiter as ',' [2019-10-17:21:37:42:INFO] Determined delimiter of CSV input is ',' ###Markdown As usual, we copy the results of the batch transform job to our local instance. 
###Code !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ###Output Completed 256.0 KiB/369.4 KiB (3.5 MiB/s) with 1 file(s) remaining Completed 369.4 KiB/369.4 KiB (4.9 MiB/s) with 1 file(s) remaining download: s3://sagemaker-us-east-1-651156768626/xgboost-2019-10-17-21-33-39-230/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown Read in the results of the batch transform job. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] ###Output _____no_output_____ ###Markdown And check the accuracy of our current model. ###Code accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown So it would appear that *something* has changed since our model is no longer (as) effective at determining the sentiment of a user-provided review.In a real-life scenario you would check a number of different things to see what exactly is going on. In our case, we are only going to check one and that is whether some aspect of the underlying distribution has changed. In other words, we want to see if the words that appear in our new collection of reviews match the words that appear in the original training set. Of course, we want to narrow our scope a little bit so we will only look at the `5000` most frequently appearing words in each data set, or in other words, the vocabulary generated by each data set.Before doing that, however, let's take a look at some of the incorrectly classified reviews in the new data set.To start, we will deploy the original XGBoost model. We will then use the deployed model to infer the sentiment of some of the new reviews. This will also serve as a nice excuse to deploy our model so that we can mimic a real-life scenario where we have a model that has been deployed and is being used in production.**TODO:** Deploy the XGBoost model. ###Code # TODO: Deploy the model that was created earlier. Recall that the object name is 'xgb'. xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') ###Output WARNING:sagemaker:Using already existing model: xgboost-2019-10-17-21-03-11-567 ###Markdown Diagnose the problemNow that we have our deployed "production" model, we can send some of our new data to it and filter out some of the incorrectly classified reviews. ###Code from sagemaker.predictor import csv_serializer # We need to tell the endpoint what format the data we are sending is in so that SageMaker can perform the serialization. xgb_predictor.content_type = 'text/csv' xgb_predictor.serializer = csv_serializer ###Output _____no_output_____ ###Markdown It will be useful to look at a few different examples of incorrectly classified reviews so we will start by creating a *generator* which we will use to iterate through some of the new reviews and find ones that are incorrect.**NOTE:** Understanding what Python generators are isn't really required for this module. The reason we use them here is so that we don't have to iterate through all of the new reviews, searching for incorrectly classified samples. ###Code def get_sample(in_X, in_XV, in_Y): for idx, smp in enumerate(in_X): res = round(float(xgb_predictor.predict(in_XV[idx]))) if res != in_Y[idx]: yield smp, in_Y[idx] gn = get_sample(new_X, new_XV, new_Y) ###Output _____no_output_____ ###Markdown At this point, `gn` is the *generator* which generates samples from the new data set which are not classified correctly.
To get the *next* sample we simply call the `next` method on our generator. ###Code print(next(gn)) ###Output (['spoiler', 'let', 'start', 'good', 'film', 'servic', 'act', 'cynthia', 'rothrock', 'richard', 'norton', 'rest', 'act', 'aw', 'aid', 'atroci', 'script', 'worst', 'culprit', 'villain', 'buntao', 'head', 'asian', 'crime', 'syndic', 'play', 'fran', 'tumbuan', 'laugh', 'head', 'express', 'furi', 'lost', 'bunch', 'money', 'horrid', 'perform', 'patrick', 'muldoon', 'much', 'better', 'hostil', 'takeov', 'line', 'remaind', 'titl', 'film', 'deliv', 'badli', 'one', 'could', 'main', 'charact', 'actor', 'actress', 'distinguish', 'film', 'next', 'come', 'plot', 'tell', 'need', 'know', 'origin', 'rage', 'honor', 'cynthia', 'rothrock', 'play', 'chri', 'fairchild', 'teacher', 'inner', 'citi', 'c', 'agent', 'government', 'agenc', 'sorri', 'film', 'bad', 'even', 'rememb', 'hmmm', 'imagin', 'c', 'applic', 'process', 'like', 'interview', 'past', 'job', 'experi', 'chri', 'teacher', 'interview', 'okay', 'hire', 'give', '2', 'decent', 'act', 'nice', 'plot', 'twist', 'end', 'though', 'know', 'tommi', 'muldoon', 'secret', 'villain', 'caught', 'banana'], 1) ###Markdown After looking at a few examples, maybe we decide to look at the most frequently appearing `5000` words in each data set: the original training data set and the new data set. The reason for looking at this might be that we expect the frequency of use of different words to have changed; maybe there is some new slang that has been introduced or some other artifact of popular culture that has changed the way that people write movie reviews.To do this, we start by fitting a `CountVectorizer` to the new data. ###Code new_vectorizer = CountVectorizer(max_features=5000, preprocessor=lambda x: x, tokenizer=lambda x: x) new_vectorizer.fit(new_X) ###Output _____no_output_____ ###Markdown Now that we have this new `CountVectorizer` object, we can check to see if the corresponding vocabulary has changed between the two data sets. ###Code original_vocabulary = set(vocabulary.keys()) new_vocabulary = set(new_vectorizer.vocabulary_.keys()) ###Output _____no_output_____ ###Markdown We can look at the words that were in the original vocabulary but not in the new vocabulary. ###Code print(original_vocabulary - new_vocabulary) ###Output {'21st', 'spill', 'reincarn', 'ghetto', 'weari', 'victorian', 'playboy'} ###Markdown And similarly, we can look at the words that are in the new vocabulary but which were not in the original vocabulary. ###Code print(new_vocabulary - original_vocabulary) ###Output {'dubiou', 'orchestr', 'sophi', 'masterson', 'omin', 'banana', 'optimist'} ###Markdown These words themselves don't tell us much; however, if one of these words occurred with a large frequency, that might tell us something. In particular, we wouldn't really expect any of the words above to appear with too much frequency.**Question** What exactly is going on here? Which words (if any) appear with a larger-than-expected frequency, and what does this mean? What has changed about the world that our original model no longer takes into account?**NOTE:** This is meant to be a very open-ended question. To investigate you may need more cells than the one provided below. Also, there isn't really a *correct* answer, this is meant to be an opportunity to explore the data. (TODO) Build a new modelSupposing that we believe something has changed about the underlying distribution of the words that our reviews are made up of, we need to create a new model.
This way our new model will take into account whatever it is that has changed.To begin with, we will use the new vocabulary to create a bag of words encoding of the new data. We will then use this data to train a new XGBoost model.**NOTE:** Because we believe that the underlying distribution of words has changed, it should follow that the original vocabulary that we used to construct a bag of words encoding of the reviews is no longer valid. This means that we need to be careful with our data. If we send a bag of words encoded review using the *original* vocabulary, we should not expect any sort of meaningful results.In particular, this means that if we had deployed our XGBoost model like we did in the Web App notebook then we would need to implement this vocabulary change in the Lambda function as well. ###Code new_XV = new_vectorizer.transform(new_X).toarray() ###Output _____no_output_____ ###Markdown And a quick check to make sure that the newly encoded reviews have the correct length, which should be the size of the new vocabulary which we created. ###Code len(new_XV[0]) ###Output _____no_output_____ ###Markdown Now that we have our newly encoded, newly collected data, we can split it up into a training and validation set so that we can train a new XGBoost model. As usual, we first split up the data, then save it locally and then upload it to S3. ###Code import pandas as pd # Earlier we shuffled the training dataset so to make things simple we can just assign # the first 10 000 reviews to the validation set and use the remaining reviews for training. new_val_X = pd.DataFrame(new_XV[:10000]) new_train_X = pd.DataFrame(new_XV[10000:]) new_val_y = pd.DataFrame(new_Y[:10000]) new_train_y = pd.DataFrame(new_Y[10000:]) ###Output _____no_output_____ ###Markdown In order to save some memory we will effectively delete the `new_X` variable. Remember that this contained a list of reviews and each review was a list of words. Note that once this cell has been executed you will need to read the new data in again if you want to work with it. ###Code new_X = None ###Output _____no_output_____ ###Markdown Next we save the new training and validation sets locally. Note that we overwrite the training and validation sets used earlier. This is mostly because the amount of space that we have available on our notebook instance is limited. Of course, you can increase this if you'd like but to do so may increase the cost of running the notebook instance. ###Code pd.DataFrame(new_XV).to_csv(os.path.join(data_dir, 'new_data.csv'), header=False, index=False) pd.concat([new_val_y, new_val_X], axis=1).to_csv(os.path.join(data_dir, 'new_validation.csv'), header=False, index=False) pd.concat([new_train_y, new_train_X], axis=1).to_csv(os.path.join(data_dir, 'new_train.csv'), header=False, index=False) ###Output _____no_output_____ ###Markdown Now that we've saved our data to the local instance, we can safely delete the variables to save on memory. ###Code new_val_y = new_val_X = new_train_y = new_train_X = new_XV = None ###Output _____no_output_____ ###Markdown Lastly, we make sure to upload the new training and validation sets to S3.**TODO:** Upload the new data as well as the new training and validation data sets to S3. ###Code # TODO: Upload the new data and the new validation.csv and train.csv files in the data_dir directory to S3.
new_data_location = session.upload_data(os.path.join(data_dir, 'new_data.csv'), key_prefix=prefix) new_val_location = session.upload_data(os.path.join(data_dir, 'new_validation.csv'), key_prefix=prefix) new_train_location = session.upload_data(os.path.join(data_dir, 'new_train.csv'), key_prefix=prefix) ###Output _____no_output_____ ###Markdown Once our new training data has been uploaded to S3, we can create a new XGBoost model that will take into account the changes that have occurred in our data set.**TODO:** Create a new XGBoost estimator object. ###Code # TODO: First, create a SageMaker estimator object for our model. new_xgb = sagemaker.estimator.Estimator( container, role, train_instance_count=1, train_instance_type="ml.m4.xlarge", output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), sagemaker_session=session ) # TODO: Then set the algorithm specific parameters. You may wish to use the same parameters that were # used when training the original model. new_xgb.set_hyperparameters( max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective='binary:logistic', early_stopping_rounds=10, num_round=500 ) ###Output _____no_output_____ ###Markdown Once the model has been created, we can train it with our new data.**TODO:** Train the new XGBoost model. ###Code # TODO: First, make sure that you create s3 input objects so that SageMaker knows where to # find the training and validation data. s3_new_input_train = sagemaker.s3_input(s3_data=new_train_location, content_type='csv') s3_new_input_validation = sagemaker.s3_input(s3_data=new_val_location, content_type='csv') # TODO: Using the new validation and training data, 'fit' your new model. new_xgb.fit({'train': s3_new_input_train, 'validation': s3_new_input_validation}) ###Output 2019-10-17 21:58:22 Starting - Starting the training job... 2019-10-17 21:58:24 Starting - Launching requested ML instances...... 2019-10-17 21:59:25 Starting - Preparing the instances for training...... 2019-10-17 22:00:46 Downloading - Downloading input data 2019-10-17 22:00:46 Training - Downloading the training image.. 2019-10-17 22:01:05 Training - Training image download completed. Training in progress.Arguments: train [2019-10-17:22:01:06:INFO] Running standalone xgboost training. [2019-10-17:22:01:06:INFO] File size need to be processed in the node: 238.47mb. Available memory size in the node: 8595.61mb [2019-10-17:22:01:06:INFO] Determined delimiter of CSV input is ',' [22:01:06] S3DistributionType set as FullyReplicated [22:01:08] 15000x5000 matrix with 75000000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=, [2019-10-17:22:01:08:INFO] Determined delimiter of CSV input is ',' [22:01:08] S3DistributionType set as FullyReplicated [22:01:09] 10000x5000 matrix with 50000000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=, [22:01:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 4 pruned nodes, max_depth=5 [0]#011train-error:0.3126#011validation-error:0.3098 Multiple eval metrics have been passed: 'validation-error' will be used for early stopping.  Will train until validation-error hasn't improved in 10 rounds.
[22:01:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 6 pruned nodes, max_depth=5 [1]#011train-error:0.286133#011validation-error:0.2822 [22:01:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [2]#011train-error:0.2872#011validation-error:0.282 [22:01:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [3]#011train-error:0.285133#011validation-error:0.2802 [22:01:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 2 pruned nodes, max_depth=5 [4]#011train-error:0.2656#011validation-error:0.2615 [22:01:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [5]#011train-error:0.264#011validation-error:0.2614 [22:01:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 6 pruned nodes, max_depth=5 [6]#011train-error:0.2522#011validation-error:0.2556 [22:01:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 4 pruned nodes, max_depth=5 [7]#011train-error:0.248#011validation-error:0.25 [22:01:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5 [8]#011train-error:0.2468#011validation-error:0.2515 [22:01:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 8 pruned nodes, max_depth=5 [9]#011train-error:0.240867#011validation-error:0.247 [22:01:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 10 pruned nodes, max_depth=5 [10]#011train-error:0.236#011validation-error:0.2409 [22:01:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 2 pruned nodes, max_depth=5 [11]#011train-error:0.230867#011validation-error:0.2373 [22:01:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 4 pruned nodes, max_depth=5 [12]#011train-error:0.227133#011validation-error:0.2324 [22:01:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 14 pruned nodes, max_depth=5 [13]#011train-error:0.220133#011validation-error:0.2265 [22:01:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [14]#011train-error:0.217067#011validation-error:0.2222 [22:01:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 10 pruned nodes, max_depth=5 [15]#011train-error:0.215267#011validation-error:0.2212 [22:01:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 4 pruned nodes, max_depth=5 [16]#011train-error:0.211467#011validation-error:0.2185 [22:01:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [17]#011train-error:0.209133#011validation-error:0.2172 [22:01:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5 [18]#011train-error:0.207067#011validation-error:0.2158 [22:01:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 8 pruned nodes, max_depth=5 [19]#011train-error:0.205533#011validation-error:0.2136 [22:01:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [20]#011train-error:0.2036#011validation-error:0.2112 [22:01:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5 [21]#011train-error:0.201467#011validation-error:0.2099 [22:01:41] src/tree/updater_prune.cc:74: tree pruning 
end, 1 roots, 46 extra nodes, 2 pruned nodes, max_depth=5 [22]#011train-error:0.1976#011validation-error:0.2089 [22:01:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 4 pruned nodes, max_depth=5 [23]#011train-error:0.194733#011validation-error:0.2059 [22:01:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5 [24]#011train-error:0.193067#011validation-error:0.2049 [22:01:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [25]#011train-error:0.191067#011validation-error:0.2033 [22:01:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 12 pruned nodes, max_depth=5 [26]#011train-error:0.187333#011validation-error:0.2021 [22:01:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 10 pruned nodes, max_depth=5 [27]#011train-error:0.185467#011validation-error:0.2009 [22:01:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5 [28]#011train-error:0.184067#011validation-error:0.1999 [22:01:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 12 pruned nodes, max_depth=5 [29]#011train-error:0.182533#011validation-error:0.1991 [22:01:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [30]#011train-error:0.180133#011validation-error:0.1974 [22:01:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [31]#011train-error:0.1774#011validation-error:0.1959 [22:01:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5 [32]#011train-error:0.1758#011validation-error:0.1954 [22:01:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 10 pruned nodes, max_depth=5 [33]#011train-error:0.173733#011validation-error:0.1943 [22:01:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 14 pruned nodes, max_depth=5 [34]#011train-error:0.1728#011validation-error:0.1933 [22:01:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [35]#011train-error:0.17#011validation-error:0.1928 [22:01:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 6 pruned nodes, max_depth=5 [36]#011train-error:0.1692#011validation-error:0.1929 [22:02:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 8 pruned nodes, max_depth=5 [37]#011train-error:0.169533#011validation-error:0.193 [22:02:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 6 pruned nodes, max_depth=5 [38]#011train-error:0.1674#011validation-error:0.192 [22:02:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 6 pruned nodes, max_depth=5 [39]#011train-error:0.166867#011validation-error:0.1925 [22:02:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 12 pruned nodes, max_depth=5 [40]#011train-error:0.164867#011validation-error:0.1918 [22:02:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 6 pruned nodes, max_depth=5 [41]#011train-error:0.163333#011validation-error:0.1906 [22:02:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 8 pruned nodes, max_depth=5 [42]#011train-error:0.1622#011validation-error:0.1908 [22:02:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 4 pruned 
nodes, max_depth=5 [43]#011train-error:0.1608#011validation-error:0.1896 ###Markdown (TODO) Check the new modelSo now we have a new XGBoost model that we believe more accurately represents the state of the world at this time, at least in how it relates to the sentiment analysis problem that we are working on. The next step is to double check that our model is performing reasonably.To do this, we will first test our model on the new data.**Note:** In practice this is a pretty bad idea. We already trained our model on the new data, so testing it shouldn't really tell us much. In fact, this is sort of a textbook example of leakage. We are only doing it here so that we have a numerical baseline.**Question:** How might you address the leakage problem? First, we create a new transformer based on our new XGBoost model.**TODO:** Create a transformer object from the newly created XGBoost model. ###Code # TODO: Create a transformer object from the new_xgb model new_xgb_transformer = new_xgb.transformer(instance_count=1, instance_type="ml.m4.xlarge") ###Output _____no_output_____ ###Markdown Next we test our model on the new data.**TODO:** Use the transformer object to transform the new data (stored in the `new_data_location` variable) ###Code # TODO: Using new_xgb_transformer, transform the new_data_location data. You may wish to # 'wait' for the transform job to finish. new_xgb_transformer.transform(new_data_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() ###Output ..................Arguments: serve [2019-10-17 22:06:52 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2019-10-17 22:06:52 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2019-10-17 22:06:52 +0000] [1] [INFO] Using worker: gevent [2019-10-17 22:06:52 +0000] [38] [INFO] Booting worker with pid: 38 [2019-10-17 22:06:52 +0000] [39] [INFO] Booting worker with pid: 39 [2019-10-17 22:06:52 +0000] [40] [INFO] Booting worker with pid: 40 [2019-10-17 22:06:52 +0000] [41] [INFO] Booting worker with pid: 41 [2019-10-17:22:06:52:INFO] Model loaded successfully for worker : 38 [2019-10-17:22:06:52:INFO] Model loaded successfully for worker : 39 [2019-10-17:22:06:52:INFO] Model loaded successfully for worker : 41 [2019-10-17:22:06:52:INFO] Model loaded successfully for worker : 40 [2019-10-17:22:07:10:INFO] Sniff delimiter as ',' [2019-10-17:22:07:10:INFO] Determined delimiter of CSV input is ',' [2019-10-17:22:07:10:INFO] Sniff delimiter as ',' [2019-10-17:22:07:10:INFO] Determined delimiter of CSV input is ',' [2019-10-17:22:07:11:INFO] Sniff delimiter as ',' [2019-10-17:22:07:11:INFO] Determined delimiter of CSV input is ',' [2019-10-17:22:07:10:INFO] Sniff delimiter as ',' [2019-10-17:22:07:10:INFO] Determined delimiter of CSV input is ',' [2019-10-17:22:07:10:INFO] Sniff delimiter as ',' [2019-10-17:22:07:10:INFO] Determined delimiter of CSV input is ',' [2019-10-17:22:07:11:INFO] Sniff delimiter as ',' [2019-10-17:22:07:11:INFO] Determined delimiter of CSV input is ',' 2019-10-17T22:07:08.217:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD [2019-10-17:22:07:12:INFO] Sniff delimiter as ',' [2019-10-17:22:07:12:INFO] Determined delimiter of CSV input is ',' [2019-10-17:22:07:12:INFO] Sniff delimiter as ',' [2019-10-17:22:07:12:INFO] Determined delimiter of CSV input is ',' [2019-10-17:22:07:12:INFO] Sniff delimiter as ',' [2019-10-17:22:07:12:INFO] Determined delimiter of CSV input is ',' [2019-10-17:22:07:12:INFO] Sniff delimiter as ',' [2019-10-17:22:07:12:INFO] 
Determined delimiter of CSV input is ','
[2019-10-17:22:07:31:INFO] Determined delimiter of CSV input is ',' [2019-10-17:22:07:31:INFO] Sniff delimiter as ',' [2019-10-17:22:07:31:INFO] Determined delimiter of CSV input is ',' ###Markdown Copy the results to our local instance. ###Code !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir ###Output download: s3://sagemaker-us-east-1-651156768626/xgboost-2019-10-17-22-03-36-019/new_data.csv.out to ../data/sentiment_update/new_data.csv.out ###Markdown And see how well the model did. ###Code predictions = pd.read_csv(os.path.join(data_dir, 'new_data.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(new_Y, predictions) ###Output _____no_output_____ ###Markdown As expected, since we trained the model on this data, our model performs pretty well. So, we have reason to believe that our new XGBoost model is a "better" model.However, before we start changing our deployed model, we should first make sure that our new model isn't too different. In other words, if our new model performed really poorly on the original test data then this might be an indication that something else has gone wrong.To start with, since we got rid of the variable that stored the original test reviews, we will read them in again from the cache that we created in Step 3. Note that we need to make sure that we read in the original test data after it has been pre-processed with `nltk` but before it has been bag of words encoded. This is because we need to use the new vocabulary instead of the original one. ###Code cache_data = None with open(os.path.join(cache_dir, "preprocessed_data.pkl"), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", "preprocessed_data.pkl") test_X = cache_data['words_test'] test_Y = cache_data['labels_test'] # Here we set cache_data to None so that it doesn't occupy memory cache_data = None ###Output Read preprocessed data from cache file: preprocessed_data.pkl ###Markdown Once we've loaded the original test reviews, we need to create a bag of words encoding of them using the new vocabulary that we created, based on the new data.**TODO:** Transform the original test data using the new vocabulary. ###Code # TODO: Use the new_vectorizer object that you created earlier to transform the test_X data. test_X = new_vectorizer.transform(test_X).toarray() ###Output _____no_output_____ ###Markdown Now that we have correctly encoded the original test data, we can write it to the local instance, upload it to S3 and test it. ###Code pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) new_xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') new_xgb_transformer.wait() !aws s3 cp --recursive $new_xgb_transformer.output_path $data_dir predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] accuracy_score(test_Y, predictions) ###Output _____no_output_____ ###Markdown It would appear that our new XGBoost model is performing quite well on the old test data. This gives us some indication that our new model should be put into production and replace our original model. Step 6: (TODO) Updating the ModelSo we have a new model that we'd like to use instead of one that is already deployed. 
Furthermore, we are assuming that the model that is already deployed is being used in some sort of application. As a result, what we want to do is update the existing endpoint so that it uses our new model.Of course, to do this we need to create an endpoint configuration for our newly created model.First, note that we can access the name of the model that we created above using the `model_name` property of the transformer. The reason for this is that in order for the transformer to create a batch transform job it needs to first create the model object inside of SageMaker. Since we've sort of already done this we should take advantage of it. ###Code new_xgb_transformer.model_name ###Output _____no_output_____ ###Markdown Next, we create an endpoint configuration using the low level approach of creating the dictionary object which describes the endpoint configuration we want.**TODO:** Using the low level approach, create a new endpoint configuration. Don't forget that it needs a name and that the name needs to be unique. If you get stuck, try looking at the Boston Housing Low Level Deployment tutorial notebook. ###Code from time import gmtime, strftime # TODO: Give our endpoint configuration a name. Remember, it needs to be unique. new_xgb_endpoint_config_name = "sentiment-xgboost-endpoint-config-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) # TODO: Using the SageMaker Client, construct the endpoint configuration. new_xgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config( EndpointConfigName=new_xgb_endpoint_config_name, ProductionVariants=[ { "VariantName": "AllTraffic", "InstanceType": "ml.m4.xlarge", "InitialInstanceCount": 1, "InitialVariantWeight": 1, "ModelName": new_xgb_transformer.model_name } ] ) ###Output _____no_output_____ ###Markdown Once the endpoint configuration has been constructed, it is a straightforward matter to ask SageMaker to update the existing endpoint so that it uses the new endpoint configuration.Of note here is that SageMaker does this in such a way that there is no downtime. Essentially, SageMaker deploys the new model and then updates the original endpoint so that it points to the newly deployed model. After that, the original model is shut down. This way, whatever app is using our endpoint won't notice that we've changed the model that is being used.**TODO:** Use the SageMaker Client to update the endpoint that you deployed earlier. ###Code # TODO: Update the xgb_predictor.endpoint so that it uses new_xgb_endpoint_config_name. session.sagemaker_client.update_endpoint( EndpointName=xgb_predictor.endpoint, EndpointConfigName=new_xgb_endpoint_config_name ) ###Output _____no_output_____ ###Markdown And, as is generally the case with SageMaker requests, this is being done in the background so if we want to wait for it to complete we need to call the appropriate method. ###Code session.wait_for_endpoint(xgb_predictor.endpoint) ###Output ----------------------------------------------------------------------------------------------! ###Markdown Step 7: Delete the EndpointOf course, since we are done with the deployed endpoint we need to make sure to shut it down, otherwise we will continue to be charged for it. ###Code xgb_predictor.delete_endpoint() ###Output _____no_output_____ ###Markdown Some Additional QuestionsThis notebook is a little different from the other notebooks in this module. In part, this is because it is meant to be a little bit closer to the type of problem you may face in a real world scenario. 
Of course, this problem is a very easy one with a prescribed solution, but there are many other interesting questions that we did not consider here and that you may wish to consider yourself.For example,- What other ways could the underlying distribution change?- Is it a good idea to re-train the model using only the new data?- What would change if the quantity of new data wasn't large, say you only received 500 samples? Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ###Code # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir ###Output _____no_output_____
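###Markdown As a closing aside on the "Additional Questions" above, here is one way the leakage issue could be handled, shown only as a minimal sketch rather than as part of the prescribed solution: hold out a slice of the new data *before* any re-training and keep it for the final check. The snippet assumes the bag-of-words features and labels for the new data are available as `new_X` and `new_Y` (`new_Y` is used earlier in this notebook; `new_X` is an assumption here). ###Code
from sklearn.model_selection import train_test_split

# Sketch only: split the new data before re-training so the final evaluation
# set is never seen by the re-trained model, avoiding the leakage noted in
# the "check the new model" step.
# `new_X` / `new_Y` are assumed to be the bag-of-words features and labels
# built for the new reviews earlier in this notebook.
new_X_fit, new_X_holdout, new_Y_fit, new_Y_holdout = train_test_split(
    new_X, new_Y, test_size=0.2, random_state=42)

# Re-train only on (new_X_fit, new_Y_fit); score the updated model on
# (new_X_holdout, new_Y_holdout) for an estimate that is not leaked.
###Output _____no_output_____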
1-Lessons/Lesson03/ENGR-1330-Lesson03.ipynb
###Markdown Download this page as a jupyter notebook at [Lesson 3](http://54.243.252.9/engr-1330-webroot/1-Lessons/Lesson03/ENGR-1330-Lesson03.ipynb) ENGR 1330 Computational Thinking with Data Science Copyright © 2021 Theodore G. Cleveland and Farhang ForghanparastLast GitHub Commit Date: 13 July 2021 Lesson 3 Data Structures: - Data structures; lists, arrays, tuples, sets, dictionaries- Name, index, contents; keys--- ###Code # Script block to identify host, user, and kernel import sys ! hostname; ! whoami; ! pwd; print(sys.executable) %%html <!-- Script Block to set tables to left alignment --> <style> table {margin-left: 0 !important;} </style> ###Output _____no_output_____ ###Markdown --- Objectives1. Awareness of data structures available in Python to store and manipulate data 2. Implement arrays (lists), dictionaries, and tuples2. Address contents of lists , dictionaries, and tuples --- Data Structures and Conditional Statements**Computational thinking (CT)** concepts involved are:- `Decomposition` : Data interpretation, manipulation, and analysis of NumPy arrays- `Abstraction` : Data structures; Arrays, lists, tuples, sets, and dictionaries- `Algorithms` : Conditional statements What is a data structure?Data Structures are a specialized means of organizing and storing data in computers in such a way that we can perform operations on the stored data more efficiently.In our iPython world the structures are illustrated in the figure below![](http://54.243.252.9/engr-1330-webroot/1-Lessons/Lesson03/data-structures.png) ListsA list is a collection of data that are somehow related. It is a convenient way to refer to acollection of similar things by a single name, and using an index (like a subscript in math)to identify a particular item.Consider the "math-like" variable $x$ below:\begin{gather}x_0= 7 \\x_1= 11 \\x_2= 5 \\x_3= 9 \\x_4= 13 \\\dots \\x_N= 223 \\\end{gather} The variable name is $x$ and the subscripts correspond to different values. Thus the `value` of the variable named $x$ associated with subscript $3$ is the number $9$.The figure below is a visual representation of a the concept that treats a variable as a collection of cells. ![](http://54.243.252.9/engr-1330-webroot/1-Lessons/Lesson03/array-image.jpg)In the figure, the variable name is `MyList`, the subscripts are replaced by an indexwhich identifies which cell is being referenced. The value is the cell content at the particular index. So in the figure the value of `MyList` at Index = 3 is the number 9.'In engineering and data science we use lists a lot - we often call then vectors, arrays, matrices and such, but they are ultimately just lists.To declare a list you can write the list name and assign it values. The square brackets are used to identify that the variable is a list. Like: MyList = [7,11,5,9,13,66,99,223]One can also declare a null list and use the `append()` method to fill it as needed. MyOtherList = [ ] Python indices start at **ZERO**. A lot of other languages start at ONE. It's just the convention. The first element in a list has an index of 0, the second an index of 1, and so on.We access the contents of a list by referring to its name and index. For example MyList[3] has a value of the number 9. ArraysArrays are special lists that are used to store only elements of a specific data type, and require use of an external dependency (package) named **array**. 
The package is installed with core python, so other than importing it into a script nothing else special is needed.Arrays are:- Ordered: Elements in an array can be indexed- Mutable: Elements in an array can be altered![](http://54.243.252.9/engr-1330-webroot/1-Lessons/Lesson03/python-arrays-index-local.png) Data type that an array must hold is specified using the type code when it is created- ‘f’ for float- ‘d’ for double- ‘i’ for signed int- ‘I’ for unsigned intMore types are listed below|Type Code|C Data Type|Python Data Type|Minimum Size in Bytes||:---|---|---|---:||'b'| signed char|int |1||'B'| unsigned char |int |1||'h'| signed short |int |2||'H'| unsigned short |int |2||'i'| signed int |int |2||'I'| unsigned int |int |2||'l'| signed long |int |4||'L'| unsigned long |int |4||'q'| signed long long |int |8||'Q'| unsigned long long |int |8||'f'| float |float |4||'d'| double |float |8|To use arrays, a library named ‘array’ must be imported ###Code import array ###Output _____no_output_____ ###Markdown Creating an array that contains signed integer numbers ###Code myarray = array.array('i', [1, 2, 4, 8, 16, 32]) myarray[0] #1-st element, 0-th position import array as arr #import using an alias so the calls don't look so funny myarray = arr.array('i', [1, 2, 4, 8, 16, 32]) myarray[0] #1-st element, 0-th position ###Output _____no_output_____ ###Markdown Lists: Can store elements of different data types; like arrays they are (arrays are lists, but lists are not quite arrays!)- Ordered: Elements in a list can be indexed- Mutable: Elements in a list can be altered- Mathematical operations must be applied to each element of the list Tuple - A special listA tuple is a special kind of list where the **values cannot be changed** after the list is created.Such a property is called `immutable`It is useful for list-like things that are static - like days in a week, or months of a year.You declare a tuple like a list, except use round brackets instead of square brackets. MyTupleName = ("Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec") Tuples are often created as output from packages and functions.Removing individual tuple elements is not possible. There is, of course, nothing wrong with putting together another tuple with the undesired elements discarded.To explicitly remove an entire tuple, just use the del statement. ###Code a_tuple = ("a", "b") print(a_tuple) added_value = "c" added_value_tuple = (added_value,) # notice the dangling comma short_tuple =(a_tuple[1],) # notice the dangling comma new_tuple = short_tuple + added_value_tuple del(a_tuple) # kill the original print(new_tuple) ###Output ('a', 'b') ('b', 'c') ###Markdown Dictionary - A special listA dictionary is a special kind of list where the items are related data `PAIRS`. 
It is a lot like a relational database (it probably is one in fact) where the first item in the pair is called the key, and must be unique in a dictionary, and the second item in the pair is the data.The second item could itself be a list, so a dictionary would be a meaningful way to build a database in Python.To declare a dictionary using `curly` brackets MyPetsNamesAndMass = { "Dusty":7.8 , "Aspen":6.3, "Merrimee":0.03}To declare a dictionary using the `dict()` method MyPetsNamesAndMassToo = dict(Dusty = 7.8 , Aspen = 6.3, Merrimee = 0.03) Dictionary properties- Unordered: Elements in a dictionary cannot be indexed- Mutable elements: Elements in a dictionary can be altered- Immutable keys: Keys in a dictionary cannot be altered ###Code MyPetsNamesAndMass = { "Dusty":7.8 , "Aspen":6.3, "Merrimee":0.03} MyPetsNamesAndMassToo = dict(Dusty = 7.8 , Aspen = 6.3, Merrimee = 0.03) print(MyPetsNamesAndMass) print(MyPetsNamesAndMassToo) ###Output {'Dusty': 7.8, 'Aspen': 6.3, 'Merrimee': 0.03} {'Dusty': 7.8, 'Aspen': 6.3, 'Merrimee': 0.03} ###Markdown Sets - A special listSets: Are used to store elements of different data types- Unordered: Elements in a set cannot be indexed- Mutable: Elements in a set can be altered- Non-repetition: Elements in a set are uniqueElements of a set are enclosed in curly brackets { }- Creating sets that contain different data types- Sets cannot be nested Example of a DictionaryA dictionary, using natural numbers as keys ###Code myset = {1:'one',2:'two',3:{1:'one',2:'two',3:'seven of nine'}} type(myset) (myset.get(3)).get(3) # get element from key 3 of key 3 set ###Output _____no_output_____ ###Markdown Example of a Set (no explicit keys)A set, three elements, no explicit keys ###Code myset = {1,2,77} type(myset) ###Output _____no_output_____ ###Markdown Another set ###Code urset={'apple','cat','rock',77,'sunset strip'} type(urset) ###Output _____no_output_____ ###Markdown Union and Intersection of two sets - Union joins all unique elements (null return only if all sets are empty)- Intersection extracts all common elements (null returns possible) ###Code # union (join) sets print('union is : ' ,myset | urset) # intersection of sets (shared elements) print('intersection is : ' ,myset & urset) ###Output union is : {1, 2, 'rock', 77, 'apple', 'cat', 'sunset strip'} intersection is : {77} ###Markdown Set constructor method is another way to create a set. ###Code thisset = set(("apple", "banana", "cherry")) # note the double round-brackets print(thisset) ###Output {'banana', 'apple', 'cherry'} ###Markdown What's the difference between a set and dictionary? A set is like a dictionary where the keys themselves are the values; the keys are unique (duplicates are not allowed). You can construct sets with duplicates and the constructor will drop duplicates - try it with the first set above.Another comparison from https://stackoverflow.com/questions/34370599/difference-between-dict-and-set-python is "Well, a set is like a dict with keys but no values, and they're both implemented using a hash table. But yes, it's a little annoying that the {} notation denotes an empty dict rather than an empty set, but that's a historical artifact."In the example below, we look at empty versions of each. ###Code # Empty dictionary webster = {} print(type(webster)) # Empty set empty = set(()) print(type(empty)) ###Output <class 'dict'> <class 'set'>
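###Markdown Union and intersection are just two of the built-in set operations; the short sketch below (added for illustration, not part of the original lesson) reuses `myset`, `urset`, and `MyPetsNamesAndMass` from the cells above to show difference, symmetric difference, and membership testing, which is written the same way for a set as it is for a dictionary's keys. ###Code
# difference : elements of myset that are not in urset
print('difference is : ', myset - urset)
# symmetric difference : elements that appear in exactly one of the two sets
print('symmetric difference is : ', myset ^ urset)
# membership testing looks identical for a set and for a dictionary;
# for a dictionary the test is applied to its keys
print(77 in urset)
print('Dusty' in MyPetsNamesAndMass)
###Output _____no_output_____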
Intro to TensorFlow for Deep Learning/08_08_forecasting_with_lstm.ipynb
###Markdown Copyright 2018 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Forecasting with an LSTM Run in Google Colab View source on GitHub Setup ###Code from __future__ import absolute_import, division, print_function, unicode_literals try: # Use the %tensorflow_version magic if in colab. %tensorflow_version 2.x except Exception: pass import numpy as np import matplotlib.pyplot as plt import tensorflow as tf keras = tf.keras def plot_series(time, series, format="-", start=0, end=None, label=None): plt.plot(time[start:end], series[start:end], format, label=label) plt.xlabel("Time") plt.ylabel("Value") if label: plt.legend(fontsize=14) plt.grid(True) def trend(time, slope=0): return slope * time def seasonal_pattern(season_time): """Just an arbitrary pattern, you can change it if you wish""" return np.where(season_time < 0.4, np.cos(season_time * 2 * np.pi), 1 / np.exp(3 * season_time)) def seasonality(time, period, amplitude=1, phase=0): """Repeats the same pattern at each period""" season_time = ((time + phase) % period) / period return amplitude * seasonal_pattern(season_time) def white_noise(time, noise_level=1, seed=None): rnd = np.random.RandomState(seed) return rnd.randn(len(time)) * noise_level def sequential_window_dataset(series, window_size): series = tf.expand_dims(series, axis=-1) ds = tf.data.Dataset.from_tensor_slices(series) ds = ds.window(window_size + 1, shift=window_size, drop_remainder=True) ds = ds.flat_map(lambda window: window.batch(window_size + 1)) ds = ds.map(lambda window: (window[:-1], window[1:])) return ds.batch(1).prefetch(1) time = np.arange(4 * 365 + 1) slope = 0.05 baseline = 10 amplitude = 40 series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude) noise_level = 5 noise = white_noise(time, noise_level, seed=42) series += noise plt.figure(figsize=(10, 6)) plot_series(time, series) plt.show() split_time = 1000 time_train = time[:split_time] x_train = series[:split_time] time_valid = time[split_time:] x_valid = series[split_time:] class ResetStatesCallback(keras.callbacks.Callback): def on_epoch_begin(self, epoch, logs): self.model.reset_states() ###Output _____no_output_____ ###Markdown LSTM RNN Forecasting ###Code keras.backend.clear_session() tf.random.set_seed(42) np.random.seed(42) window_size = 30 train_set = sequential_window_dataset(x_train, window_size) model = keras.models.Sequential([ keras.layers.LSTM(100, return_sequences=True, stateful=True, batch_input_shape=[1, None, 1]), keras.layers.LSTM(100, return_sequences=True, stateful=True), keras.layers.Dense(1), keras.layers.Lambda(lambda x: x * 200.0) ]) lr_schedule = keras.callbacks.LearningRateScheduler( lambda epoch: 1e-8 * 10**(epoch / 20)) reset_states = ResetStatesCallback() optimizer = keras.optimizers.SGD(lr=1e-8, momentum=0.9) model.compile(loss=keras.losses.Huber(), optimizer=optimizer, metrics=["mae"]) history = model.fit(train_set, epochs=100, callbacks=[lr_schedule, reset_states]) 
plt.semilogx(history.history["lr"], history.history["loss"]) plt.axis([1e-8, 1e-4, 0, 30]) keras.backend.clear_session() tf.random.set_seed(42) np.random.seed(42) window_size = 30 train_set = sequential_window_dataset(x_train, window_size) valid_set = sequential_window_dataset(x_valid, window_size) model = keras.models.Sequential([ keras.layers.LSTM(100, return_sequences=True, stateful=True, batch_input_shape=[1, None, 1]), keras.layers.LSTM(100, return_sequences=True, stateful=True), keras.layers.Dense(1), keras.layers.Lambda(lambda x: x * 200.0) ]) optimizer = keras.optimizers.SGD(lr=5e-7, momentum=0.9) model.compile(loss=keras.losses.Huber(), optimizer=optimizer, metrics=["mae"]) reset_states = ResetStatesCallback() model_checkpoint = keras.callbacks.ModelCheckpoint( "my_checkpoint.h5", save_best_only=True) early_stopping = keras.callbacks.EarlyStopping(patience=50) model.fit(train_set, epochs=500, validation_data=valid_set, callbacks=[early_stopping, model_checkpoint, reset_states]) model = keras.models.load_model("my_checkpoint.h5") rnn_forecast = model.predict(series[np.newaxis, :, np.newaxis]) rnn_forecast = rnn_forecast[0, split_time - 1:-1, 0] plt.figure(figsize=(10, 6)) plot_series(time_valid, x_valid) plot_series(time_valid, rnn_forecast) keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy() ###Output _____no_output_____
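###Markdown For context on that error value, the short sketch below (an addition for illustration, not part of the original notebook) computes the same metric for a naive persistence forecast, i.e. predicting that each day simply repeats the previous day's value, so the LSTM's MAE can be judged against a simple baseline. It reuses `series`, `split_time`, `x_valid`, and `keras` exactly as defined above. ###Code
# naive "persistence" baseline: the forecast for day t is the observed value at day t-1
naive_forecast = series[split_time - 1:-1]

# same metric as used for the LSTM forecast above
naive_mae = keras.metrics.mean_absolute_error(x_valid, naive_forecast).numpy()
print("naive persistence MAE:", naive_mae)
###Output _____no_output_____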
C1/W4/ungraded_labs/C1_W4_Lab_1_image_generator_no_validation.ipynb
###Markdown Ungraded Lab: Training with ImageDataGeneratorIn this lab, you will build a train a model on the [Horses or Humans](https://www.tensorflow.org/datasets/catalog/horses_or_humans) dataset. This contains over a thousand images of horses and humans with varying poses and filesizes. You will use the [ImageDataGenerator](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator) class to prepare this dataset so it can be fed to a convolutional neural network.**IMPORTANT NOTE:** This notebook is designed to run as a Colab. Running it on your local machine might result in some of the code blocks throwing errors. Run the code below to download the compressed dataset `horse-or-human.zip`. ###Code !gdown --id 1onaG42NZft3wCE1WH0GDEbUhu75fedP5 ###Output _____no_output_____ ###Markdown You can then unzip the archive using the [zipfile](https://docs.python.org/3/library/zipfile.html) module. ###Code import zipfile # Unzip the dataset local_zip = './horse-or-human.zip' zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('./horse-or-human') zip_ref.close() ###Output _____no_output_____ ###Markdown The contents of the .zip are extracted to the base directory `./horse-or-human`, which in turn each contain `horses` and `humans` subdirectories.In short: The training set is the data that is used to tell the neural network model that 'this is what a horse looks like' and 'this is what a human looks like'.One thing to pay attention to in this sample: We do not explicitly label the images as horses or humans. You will use the ImageDataGenerator API instead -- and this is coded to automatically label images according to the directory names and structure. So, for example, you will have a 'training' directory containing a 'horses' directory and a 'humans' one. `ImageDataGenerator` will label the images appropriately for you, reducing a coding step. You can now define each of these directories: ###Code import os # Directory with our training horse pictures train_horse_dir = os.path.join('./horse-or-human/horses') # Directory with our training human pictures train_human_dir = os.path.join('./horse-or-human/humans') ###Output _____no_output_____ ###Markdown Now see what the filenames look like in the `horses` and `humans` training directories: ###Code train_horse_names = os.listdir(train_horse_dir) print(train_horse_names[:10]) train_human_names = os.listdir(train_human_dir) print(train_human_names[:10]) ###Output _____no_output_____ ###Markdown You can also find out the total number of horse and human images in the directories: ###Code print('total training horse images:', len(os.listdir(train_horse_dir))) print('total training human images:', len(os.listdir(train_human_dir))) ###Output _____no_output_____ ###Markdown Now take a look at a few pictures to get a better sense of what they look like. First, configure the `matplotlib` parameters: ###Code %matplotlib inline import matplotlib.pyplot as plt import matplotlib.image as mpimg # Parameters for our graph; we'll output images in a 4x4 configuration nrows = 4 ncols = 4 # Index for iterating over images pic_index = 0 ###Output _____no_output_____ ###Markdown Now, display a batch of 8 horse and 8 human pictures. 
You can rerun the cell to see a fresh batch each time: ###Code # Set up matplotlib fig, and size it to fit 4x4 pics fig = plt.gcf() fig.set_size_inches(ncols * 4, nrows * 4) pic_index += 8 next_horse_pix = [os.path.join(train_horse_dir, fname) for fname in train_horse_names[pic_index-8:pic_index]] next_human_pix = [os.path.join(train_human_dir, fname) for fname in train_human_names[pic_index-8:pic_index]] for i, img_path in enumerate(next_horse_pix+next_human_pix): # Set up subplot; subplot indices start at 1 sp = plt.subplot(nrows, ncols, i + 1) sp.axis('Off') # Don't show axes (or gridlines) img = mpimg.imread(img_path) plt.imshow(img) plt.show() ###Output _____no_output_____ ###Markdown Building a Small Model from ScratchNow you can define the model architecture that you will train.Step 1 will be to import tensorflow. ###Code import tensorflow as tf ###Output _____no_output_____ ###Markdown You then add convolutional layers as in the previous example, and flatten the final result to feed into the densely connected layers. Note that because this is a two-class classification problem, i.e. a *binary classification problem*, you will end your network with a [*sigmoid* activation](https://wikipedia.org/wiki/Sigmoid_function). This makes the output value of your network a single scalar between 0 and 1, encoding the probability that the current image is class 1 (as opposed to class 0). ###Code model = tf.keras.models.Sequential([ # Note the input shape is the desired size of the image 300x300 with 3 bytes color # This is the first convolution tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)), tf.keras.layers.MaxPooling2D(2, 2), # The second convolution tf.keras.layers.Conv2D(32, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The third convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The fourth convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The fifth convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # Flatten the results to feed into a DNN tf.keras.layers.Flatten(), # 512 neuron hidden layer tf.keras.layers.Dense(512, activation='relu'), # Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans') tf.keras.layers.Dense(1, activation='sigmoid') ]) ###Output _____no_output_____ ###Markdown You can review the network architecture and the output shapes with `model.summary()`. ###Code model.summary() ###Output _____no_output_____ ###Markdown The "output shape" column shows how the size of your feature map evolves in each successive layer. As you saw in an earlier lesson, the convolution layers removes the outermost pixels of the image, and each pooling layer halves the dimensions. Next, you'll configure the specifications for model training. You will train the model with the [`binary_crossentropy`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/BinaryCrossentropy) loss because it's a binary classification problem, and the final activation is a sigmoid. (For a refresher on loss metrics, see this [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/descending-into-ml/video-lecture).) You will use the `rmsprop` optimizer with a learning rate of `0.001`. 
During training, you will want to monitor classification accuracy.**NOTE**: In this case, using the [RMSprop optimization algorithm](https://wikipedia.org/wiki/Stochastic_gradient_descentRMSProp) is preferable to [stochastic gradient descent](https://developers.google.com/machine-learning/glossary/SGD) (SGD), because RMSprop automates learning-rate tuning for us. (Other optimizers, such as [Adam](https://wikipedia.org/wiki/Stochastic_gradient_descentAdam) and [Adagrad](https://developers.google.com/machine-learning/glossary/AdaGrad), also automatically adapt the learning rate during training, and would work equally well here.) ###Code from tensorflow.keras.optimizers import RMSprop model.compile(loss='binary_crossentropy', optimizer=RMSprop(learning_rate=0.001), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Data PreprocessingNext step is to set up the data generators that will read pictures in the source folders, convert them to `float32` tensors, and feed them (with their labels) to the model. You'll have one generator for the training images and one for the validation images. These generators will yield batches of images of size 300x300 and their labels (binary).As you may already know, data that goes into neural networks should usually be normalized in some way to make it more amenable to processing by the network (i.e. It is uncommon to feed raw pixels into a ConvNet.) In this case, you will preprocess the images by normalizing the pixel values to be in the `[0, 1]` range (originally all values are in the `[0, 255]` range).In Keras, this can be done via the `keras.preprocessing.image.ImageDataGenerator` class using the `rescale` parameter. This `ImageDataGenerator` class allows you to instantiate generators of augmented image batches (and their labels) via `.flow(data, labels)` or `.flow_from_directory(directory)`. ###Code from tensorflow.keras.preprocessing.image import ImageDataGenerator # All images will be rescaled by 1./255 train_datagen = ImageDataGenerator(rescale=1/255) # Flow training images in batches of 128 using train_datagen generator train_generator = train_datagen.flow_from_directory( './horse-or-human/', # This is the source directory for training images target_size=(300, 300), # All images will be resized to 300x300 batch_size=128, # Since we use binary_crossentropy loss, we need binary labels class_mode='binary') ###Output _____no_output_____ ###Markdown TrainingYou can start training for 15 epochs -- this may take a few minutes to run.Do note the values per epoch.The `loss` and `accuracy` are great indicators of progress in training. `loss` measures the current model prediction against the known labels, calculating the result. `accuracy`, on the other hand, is the portion of correct guesses. ###Code history = model.fit( train_generator, steps_per_epoch=8, epochs=15, verbose=1) ###Output _____no_output_____ ###Markdown Model PredictionNow take a look at actually running a prediction using the model. This code will allow you to choose 1 or more files from your file system, upload them, and run them through the model, giving an indication of whether the object is a horse or a human.**Important Note:** Due to some compatibility issues, the following code block will result in an error after you select the images(s) to upload if you are running this notebook as a `Colab` on the `Safari` browser. 
For all other browsers, continue with the next code block and ignore the next one after it._For Safari users: please comment out or skip the code block below, uncomment the next code block and run it._ ###Code ## CODE BLOCK FOR NON-SAFARI BROWSERS ## SAFARI USERS: PLEASE SKIP THIS BLOCK AND RUN THE NEXT ONE INSTEAD import numpy as np from google.colab import files from keras.preprocessing import image uploaded = files.upload() for fn in uploaded.keys(): # predicting images path = '/content/' + fn img = image.load_img(path, target_size=(300, 300)) x = image.img_to_array(img) x /= 255 x = np.expand_dims(x, axis=0) images = np.vstack([x]) classes = model.predict(images, batch_size=10) print(classes[0]) if classes[0]>0.5: print(fn + " is a human") else: print(fn + " is a horse") ###Output _____no_output_____ ###Markdown `Safari` users will need to upload the images(s) manually in their workspace. Please follow the instructions, uncomment the code block below and run it.Instructions on how to upload image(s) manually in a Colab:1. Select the `folder` icon on the left `menu bar`.2. Click on the `folder with an arrow pointing upwards` named `..`3. Click on the `folder` named `tmp`.4. Inside of the `tmp` folder, `create a new folder` called `images`. You'll see the `New folder` option by clicking the `3 vertical dots` menu button next to the `tmp` folder.5. Inside of the new `images` folder, upload an image(s) of your choice, preferably of either a horse or a human. Drag and drop the images(s) on top of the `images` folder.6. Uncomment and run the code block below. ###Code # # CODE BLOCK FOR SAFARI USERS # import numpy as np # from keras.preprocessing import image # import os # images = os.listdir("/tmp/images") # print(images) # for i in images: # print() # # predicting images # path = '/tmp/images/' + i # img = image.load_img(path, target_size=(300, 300)) # x = image.img_to_array(img) # x /= 255 # x = np.expand_dims(x, axis=0) # images = np.vstack([x]) # classes = model.predict(images, batch_size=10) # print(classes[0]) # if classes[0]>0.5: # print(i + " is a human") # else: # print(i + " is a horse") ###Output _____no_output_____ ###Markdown Visualizing Intermediate RepresentationsTo get a feel for what kind of features your CNN has learned, one fun thing to do is to visualize how an input gets transformed as it goes through the model.You can pick a random image from the training set, and then generate a figure where each row is the output of a layer, and each image in the row is a specific filter in that output feature map. Rerun this cell to generate intermediate representations for a variety of training images. ###Code import numpy as np import random from tensorflow.keras.preprocessing.image import img_to_array, load_img # Define a new Model that will take an image as input, and will output # intermediate representations for all layers in the previous model after # the first. successive_outputs = [layer.output for layer in model.layers[1:]] visualization_model = tf.keras.models.Model(inputs = model.input, outputs = successive_outputs) # Prepare a random input image from the training set. 
horse_img_files = [os.path.join(train_horse_dir, f) for f in train_horse_names] human_img_files = [os.path.join(train_human_dir, f) for f in train_human_names] img_path = random.choice(horse_img_files + human_img_files) img = load_img(img_path, target_size=(300, 300)) # this is a PIL image x = img_to_array(img) # Numpy array with shape (300, 300, 3) x = x.reshape((1,) + x.shape) # Numpy array with shape (1, 300, 300, 3) # Scale by 1/255 x /= 255 # Run the image through the network, thus obtaining all # intermediate representations for this image. successive_feature_maps = visualization_model.predict(x) # These are the names of the layers, so you can have them as part of the plot layer_names = [layer.name for layer in model.layers[1:]] # Display the representations for layer_name, feature_map in zip(layer_names, successive_feature_maps): if len(feature_map.shape) == 4: # Just do this for the conv / maxpool layers, not the fully-connected layers n_features = feature_map.shape[-1] # number of features in feature map # The feature map has shape (1, size, size, n_features) size = feature_map.shape[1] # Tile the images in this matrix display_grid = np.zeros((size, size * n_features)) for i in range(n_features): x = feature_map[0, :, :, i] x -= x.mean() x /= x.std() x *= 64 x += 128 x = np.clip(x, 0, 255).astype('uint8') # Tile each filter into this big horizontal grid display_grid[:, i * size : (i + 1) * size] = x # Display the grid scale = 20. / n_features plt.figure(figsize=(scale * n_features, scale)) plt.title(layer_name) plt.grid(False) plt.imshow(display_grid, aspect='auto', cmap='viridis') ###Output _____no_output_____ ###Markdown You can see above how the pixels highlighted turn to increasingly abstract and compact representations, especially at the bottom grid. The representations downstream start highlighting what the network pays attention to, and they show fewer and fewer features being "activated"; most are set to zero. This is called _representation sparsity_ and is a key feature of deep learning. These representations carry increasingly less information about the original pixels of the image, but increasingly refined information about the class of the image. You can think of a convnet (or a deep network in general) as an information distillation pipeline wherein each layer filters out the most useful features. Clean UpYou will continue with a similar exercise in the next lab but before that, run the following cell to terminate the kernel and free memory resources: ###Code import os, signal os.kill(os.getpid(), signal.SIGKILL) ###Output _____no_output_____ ###Markdown Ungraded Lab: Training with ImageDataGeneratorIn this lab, you will build a train a model on the [Horses or Humans](https://www.tensorflow.org/datasets/catalog/horses_or_humans) dataset. This contains over a thousand images of horses and humans with varying poses and filesizes. You will use the [ImageDataGenerator](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator) class to prepare this dataset so it can be fed to a convolutional neural network.**IMPORTANT NOTE:** This notebook is designed to run as a Colab. Running it on your local machine might result in some of the code blocks throwing errors. Run the code below to download the compressed dataset `horse-or-human.zip`. ###Code !gdown --id 1onaG42NZft3wCE1WH0GDEbUhu75fedP5 ###Output _____no_output_____ ###Markdown You can then unzip the archive using the [zipfile](https://docs.python.org/3/library/zipfile.html) module. 
###Code import zipfile # Unzip the dataset local_zip = './horse-or-human.zip' zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('./horse-or-human') zip_ref.close() ###Output _____no_output_____ ###Markdown The contents of the .zip are extracted to the base directory `./horse-or-human`, which in turn each contain `horses` and `humans` subdirectories.In short: The training set is the data that is used to tell the neural network model that 'this is what a horse looks like' and 'this is what a human looks like'.One thing to pay attention to in this sample: We do not explicitly label the images as horses or humans. You will use the ImageDataGenerator API instead -- and this is coded to automatically label images according to the directory names and structure. So, for example, you will have a 'training' directory containing a 'horses' directory and a 'humans' one. `ImageDataGenerator` will label the images appropriately for you, reducing a coding step. You can now define each of these directories: ###Code import os # Directory with our training horse pictures train_horse_dir = os.path.join('./horse-or-human/horses') # Directory with our training human pictures train_human_dir = os.path.join('./horse-or-human/humans') ###Output _____no_output_____ ###Markdown Now see what the filenames look like in the `horses` and `humans` training directories: ###Code train_horse_names = os.listdir(train_horse_dir) print(train_horse_names[:10]) train_human_names = os.listdir(train_human_dir) print(train_human_names[:10]) ###Output _____no_output_____ ###Markdown You can also find out the total number of horse and human images in the directories: ###Code print('total training horse images:', len(os.listdir(train_horse_dir))) print('total training human images:', len(os.listdir(train_human_dir))) ###Output _____no_output_____ ###Markdown Now take a look at a few pictures to get a better sense of what they look like. First, configure the `matplotlib` parameters: ###Code %matplotlib inline import matplotlib.pyplot as plt import matplotlib.image as mpimg # Parameters for our graph; we'll output images in a 4x4 configuration nrows = 4 ncols = 4 # Index for iterating over images pic_index = 0 ###Output _____no_output_____ ###Markdown Now, display a batch of 8 horse and 8 human pictures. You can rerun the cell to see a fresh batch each time: ###Code # Set up matplotlib fig, and size it to fit 4x4 pics fig = plt.gcf() fig.set_size_inches(ncols * 4, nrows * 4) pic_index += 8 next_horse_pix = [os.path.join(train_horse_dir, fname) for fname in train_horse_names[pic_index-8:pic_index]] next_human_pix = [os.path.join(train_human_dir, fname) for fname in train_human_names[pic_index-8:pic_index]] for i, img_path in enumerate(next_horse_pix+next_human_pix): # Set up subplot; subplot indices start at 1 sp = plt.subplot(nrows, ncols, i + 1) sp.axis('Off') # Don't show axes (or gridlines) img = mpimg.imread(img_path) plt.imshow(img) plt.show() ###Output _____no_output_____ ###Markdown Building a Small Model from ScratchNow you can define the model architecture that you will train.Step 1 will be to import tensorflow. ###Code import tensorflow as tf ###Output _____no_output_____ ###Markdown You then add convolutional layers as in the previous example, and flatten the final result to feed into the densely connected layers. Note that because this is a two-class classification problem, i.e. a *binary classification problem*, you will end your network with a [*sigmoid* activation](https://wikipedia.org/wiki/Sigmoid_function). 
This makes the output value of your network a single scalar between 0 and 1, encoding the probability that the current image is class 1 (as opposed to class 0). ###Code model = tf.keras.models.Sequential([ # Note the input shape is the desired size of the image 300x300 with 3 bytes color # This is the first convolution tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)), tf.keras.layers.MaxPooling2D(2, 2), # The second convolution tf.keras.layers.Conv2D(32, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The third convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The fourth convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The fifth convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # Flatten the results to feed into a DNN tf.keras.layers.Flatten(), # 512 neuron hidden layer tf.keras.layers.Dense(512, activation='relu'), # Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans') tf.keras.layers.Dense(1, activation='sigmoid') ]) ###Output _____no_output_____ ###Markdown You can review the network architecture and the output shapes with `model.summary()`. ###Code model.summary() ###Output _____no_output_____ ###Markdown The "output shape" column shows how the size of your feature map evolves in each successive layer. As you saw in an earlier lesson, the convolution layers removes the outermost pixels of the image, and each pooling layer halves the dimensions. Next, you'll configure the specifications for model training. You will train the model with the [`binary_crossentropy`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/BinaryCrossentropy) loss because it's a binary classification problem, and the final activation is a sigmoid. (For a refresher on loss metrics, see this [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/descending-into-ml/video-lecture).) You will use the `rmsprop` optimizer with a learning rate of `0.001`. During training, you will want to monitor classification accuracy.**NOTE**: In this case, using the [RMSprop optimization algorithm](https://wikipedia.org/wiki/Stochastic_gradient_descentRMSProp) is preferable to [stochastic gradient descent](https://developers.google.com/machine-learning/glossary/SGD) (SGD), because RMSprop automates learning-rate tuning for us. (Other optimizers, such as [Adam](https://wikipedia.org/wiki/Stochastic_gradient_descentAdam) and [Adagrad](https://developers.google.com/machine-learning/glossary/AdaGrad), also automatically adapt the learning rate during training, and would work equally well here.) ###Code from tensorflow.keras.optimizers import RMSprop model.compile(loss='binary_crossentropy', optimizer=RMSprop(learning_rate=0.001), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Data PreprocessingNext step is to set up the data generators that will read pictures in the source folders, convert them to `float32` tensors, and feed them (with their labels) to the model. You'll have one generator for the training images and one for the validation images. These generators will yield batches of images of size 300x300 and their labels (binary).As you may already know, data that goes into neural networks should usually be normalized in some way to make it more amenable to processing by the network (i.e. 
It is uncommon to feed raw pixels into a ConvNet.) In this case, you will preprocess the images by normalizing the pixel values to be in the `[0, 1]` range (originally all values are in the `[0, 255]` range).In Keras, this can be done via the `keras.preprocessing.image.ImageDataGenerator` class using the `rescale` parameter. This `ImageDataGenerator` class allows you to instantiate generators of augmented image batches (and their labels) via `.flow(data, labels)` or `.flow_from_directory(directory)`. ###Code from tensorflow.keras.preprocessing.image import ImageDataGenerator # All images will be rescaled by 1./255 train_datagen = ImageDataGenerator(rescale=1/255) # Flow training images in batches of 128 using train_datagen generator train_generator = train_datagen.flow_from_directory( './horse-or-human/', # This is the source directory for training images target_size=(300, 300), # All images will be resized to 300x300 batch_size=128, # Since we use binary_crossentropy loss, we need binary labels class_mode='binary') ###Output _____no_output_____ ###Markdown TrainingYou can start training for 15 epochs -- this may take a few minutes to run.Do note the values per epoch.The `loss` and `accuracy` are great indicators of progress in training. `loss` measures how far the current model predictions are from the known labels. `accuracy`, on the other hand, is the portion of correct guesses. ###Code history = model.fit( train_generator, steps_per_epoch=8, # There are 1,024 images in the training directory, so we're loading them in 128 at a time. So in order to load them all, we need to do 8 batches. epochs=15, # Train for 15 full passes over the training set. verbose=1 # Show a progress bar and per-epoch metrics while training. ) ###Output _____no_output_____ ###Markdown Model PredictionNow take a look at actually running a prediction using the model. This code will allow you to choose 1 or more files from your file system, upload them, and run them through the model, giving an indication of whether the object is a horse or a human.**Important Note:** Due to some compatibility issues, the following code block will result in an error after you select the image(s) to upload if you are running this notebook as a `Colab` on the `Safari` browser. For all other browsers, continue with the next code block and ignore the next one after it._For Safari users: please comment out or skip the code block below, uncomment the next code block and run it._ ###Code ## CODE BLOCK FOR NON-SAFARI BROWSERS ## SAFARI USERS: PLEASE SKIP THIS BLOCK AND RUN THE NEXT ONE INSTEAD import numpy as np from google.colab import files from keras.preprocessing import image uploaded = files.upload() for fn in uploaded.keys(): # predicting images path = '/content/' + fn img = image.load_img(path, target_size=(300, 300)) x = image.img_to_array(img) x /= 255 x = np.expand_dims(x, axis=0) images = np.vstack([x]) classes = model.predict(images, batch_size=10) print(classes[0]) if classes[0]>0.5: print(fn + " is a human") else: print(fn + " is a horse") ###Output _____no_output_____ ###Markdown `Safari` users will need to upload the image(s) manually in their workspace. Please follow the instructions, uncomment the code block below and run it.Instructions on how to upload image(s) manually in a Colab:1. Select the `folder` icon on the left `menu bar`.2. Click on the `folder with an arrow pointing upwards` named `..`3.
Click on the `folder` named `tmp`.4. Inside of the `tmp` folder, `create a new folder` called `images`. You'll see the `New folder` option by clicking the `3 vertical dots` menu button next to the `tmp` folder.5. Inside of the new `images` folder, upload an image(s) of your choice, preferably of either a horse or a human. Drag and drop the images(s) on top of the `images` folder.6. Uncomment and run the code block below. ###Code # # CODE BLOCK FOR SAFARI USERS # import numpy as np # from keras.preprocessing import image # import os # images = os.listdir("/tmp/images") # print(images) # for i in images: # print() # # predicting images # path = '/tmp/images/' + i # img = image.load_img(path, target_size=(300, 300)) # x = image.img_to_array(img) # x /= 255 # x = np.expand_dims(x, axis=0) # images = np.vstack([x]) # classes = model.predict(images, batch_size=10) # print(classes[0]) # if classes[0]>0.5: # print(i + " is a human") # else: # print(i + " is a horse") ###Output _____no_output_____ ###Markdown Visualizing Intermediate RepresentationsTo get a feel for what kind of features your CNN has learned, one fun thing to do is to visualize how an input gets transformed as it goes through the model.You can pick a random image from the training set, and then generate a figure where each row is the output of a layer, and each image in the row is a specific filter in that output feature map. Rerun this cell to generate intermediate representations for a variety of training images. ###Code import numpy as np import random from tensorflow.keras.preprocessing.image import img_to_array, load_img # Define a new Model that will take an image as input, and will output # intermediate representations for all layers in the previous model after # the first. successive_outputs = [layer.output for layer in model.layers[1:]] visualization_model = tf.keras.models.Model(inputs = model.input, outputs = successive_outputs) # Prepare a random input image from the training set. horse_img_files = [os.path.join(train_horse_dir, f) for f in train_horse_names] human_img_files = [os.path.join(train_human_dir, f) for f in train_human_names] img_path = random.choice(horse_img_files + human_img_files) img = load_img(img_path, target_size=(300, 300)) # this is a PIL image x = img_to_array(img) # Numpy array with shape (300, 300, 3) x = x.reshape((1,) + x.shape) # Numpy array with shape (1, 300, 300, 3) # Scale by 1/255 x /= 255 # Run the image through the network, thus obtaining all # intermediate representations for this image. successive_feature_maps = visualization_model.predict(x) # These are the names of the layers, so you can have them as part of the plot layer_names = [layer.name for layer in model.layers[1:]] # Display the representations for layer_name, feature_map in zip(layer_names, successive_feature_maps): if len(feature_map.shape) == 4: # Just do this for the conv / maxpool layers, not the fully-connected layers n_features = feature_map.shape[-1] # number of features in feature map # The feature map has shape (1, size, size, n_features) size = feature_map.shape[1] # Tile the images in this matrix display_grid = np.zeros((size, size * n_features)) for i in range(n_features): x = feature_map[0, :, :, i] x -= x.mean() x /= x.std() x *= 64 x += 128 x = np.clip(x, 0, 255).astype('uint8') # Tile each filter into this big horizontal grid display_grid[:, i * size : (i + 1) * size] = x # Display the grid scale = 20. 
/ n_features plt.figure(figsize=(scale * n_features, scale)) plt.title(layer_name) plt.grid(False) plt.imshow(display_grid, aspect='auto', cmap='viridis') ###Output _____no_output_____ ###Markdown You can see above how the pixels highlighted turn to increasingly abstract and compact representations, especially at the bottom grid. The representations downstream start highlighting what the network pays attention to, and they show fewer and fewer features being "activated"; most are set to zero. This is called _representation sparsity_ and is a key feature of deep learning. These representations carry increasingly less information about the original pixels of the image, but increasingly refined information about the class of the image. You can think of a convnet (or a deep network in general) as an information distillation pipeline wherein each layer filters out the most useful features. Clean UpYou will continue with a similar exercise in the next lab but before that, run the following cell to terminate the kernel and free memory resources: ###Code import os, signal os.kill(os.getpid(), signal.SIGKILL) ###Output _____no_output_____ ###Markdown Ungraded Lab: Training with ImageDataGeneratorIn this lab, you will build a train a model on the [Horses or Humans](https://www.tensorflow.org/datasets/catalog/horses_or_humans) dataset. This contains over a thousand images of horses and humans with varying poses and filesizes. You will use the [ImageDataGenerator](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator) class to prepare this dataset so it can be fed to a convolutional neural network.**IMPORTANT NOTE:** This notebook is designed to run as a Colab. Running it on your local machine might result in some of the code blocks throwing errors. Run the code below to download the compressed dataset `horse-or-human.zip`. ###Code !gdown --id 1onaG42NZft3wCE1WH0GDEbUhu75fedP5 ###Output _____no_output_____ ###Markdown You can then unzip the archive using the [zipfile](https://docs.python.org/3/library/zipfile.html) module. ###Code import zipfile # Unzip the dataset local_zip = './horse-or-human.zip' zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('./horse-or-human') zip_ref.close() ###Output _____no_output_____ ###Markdown The contents of the .zip are extracted to the base directory `./horse-or-human`, which in turn each contain `horses` and `humans` subdirectories.In short: The training set is the data that is used to tell the neural network model that 'this is what a horse looks like' and 'this is what a human looks like'.One thing to pay attention to in this sample: We do not explicitly label the images as horses or humans. You will use the ImageDataGenerator API instead -- and this is coded to automatically label images according to the directory names and structure. So, for example, you will have a 'training' directory containing a 'horses' directory and a 'humans' one. `ImageDataGenerator` will label the images appropriately for you, reducing a coding step. 
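To make that directory-to-label mapping concrete, here is a small optional sketch (an added illustration, not part of the original lab) that builds a throwaway generator over the extracted folder and prints the class mapping it infers; it assumes the dataset has already been unzipped to `./horse-or-human` as above, and the variable names are just for this sketch. ###Code
# Optional sketch: show how ImageDataGenerator infers labels from subdirectory names.
# Assumes ./horse-or-human/ already contains the 'horses' and 'humans' subdirectories.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

demo_generator = ImageDataGenerator(rescale=1/255).flow_from_directory(
    './horse-or-human/',
    target_size=(300, 300),
    batch_size=128,
    class_mode='binary')

# Class indices are assigned in alphanumeric order of the subdirectory names,
# so 'horses' maps to 0 and 'humans' maps to 1.
print(demo_generator.class_indices)
###Output _____no_output_____ ###Markdown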
You can now define each of these directories: ###Code import os # Directory with our training horse pictures train_horse_dir = os.path.join('./horse-or-human/horses') # Directory with our training human pictures train_human_dir = os.path.join('./horse-or-human/humans') ###Output _____no_output_____ ###Markdown Now see what the filenames look like in the `horses` and `humans` training directories: ###Code train_horse_names = os.listdir(train_horse_dir) print(train_horse_names[:10]) train_human_names = os.listdir(train_human_dir) print(train_human_names[:10]) ###Output _____no_output_____ ###Markdown You can also find out the total number of horse and human images in the directories: ###Code print('total training horse images:', len(os.listdir(train_horse_dir))) print('total training human images:', len(os.listdir(train_human_dir))) ###Output _____no_output_____ ###Markdown Now take a look at a few pictures to get a better sense of what they look like. First, configure the `matplotlib` parameters: ###Code %matplotlib inline import matplotlib.pyplot as plt import matplotlib.image as mpimg # Parameters for our graph; we'll output images in a 4x4 configuration nrows = 4 ncols = 4 # Index for iterating over images pic_index = 0 ###Output _____no_output_____ ###Markdown Now, display a batch of 8 horse and 8 human pictures. You can rerun the cell to see a fresh batch each time: ###Code # Set up matplotlib fig, and size it to fit 4x4 pics fig = plt.gcf() fig.set_size_inches(ncols * 4, nrows * 4) pic_index += 8 next_horse_pix = [os.path.join(train_horse_dir, fname) for fname in train_horse_names[pic_index-8:pic_index]] next_human_pix = [os.path.join(train_human_dir, fname) for fname in train_human_names[pic_index-8:pic_index]] for i, img_path in enumerate(next_horse_pix+next_human_pix): # Set up subplot; subplot indices start at 1 sp = plt.subplot(nrows, ncols, i + 1) sp.axis('Off') # Don't show axes (or gridlines) img = mpimg.imread(img_path) plt.imshow(img) plt.show() ###Output _____no_output_____ ###Markdown Building a Small Model from ScratchNow you can define the model architecture that you will train.Step 1 will be to import tensorflow. ###Code import tensorflow as tf class myCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch: int, logs={}): if logs.get("accuracy") > 0.95: self.model.stop_training = True callbacks = myCallback() ###Output _____no_output_____ ###Markdown You then add convolutional layers as in the previous example, and flatten the final result to feed into the densely connected layers. Note that because this is a two-class classification problem, i.e. a *binary classification problem*, you will end your network with a [*sigmoid* activation](https://wikipedia.org/wiki/Sigmoid_function). This makes the output value of your network a single scalar between 0 and 1, encoding the probability that the current image is class 1 (as opposed to class 0). 
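As a quick aside (an added illustration, not part of the original lab), the short sketch below applies the sigmoid function to a few arbitrary example scores to show how any real number gets squashed into that 0 to 1 range. ###Code
# Illustration only: sigmoid squashes any real-valued score into (0, 1).
# The input values below are arbitrary examples, not model outputs.
import tensorflow as tf

raw_scores = tf.constant([-4.0, -1.0, 0.0, 1.0, 4.0])
print(tf.keras.activations.sigmoid(raw_scores).numpy())
###Output _____no_output_____ ###Markdown In the model defined next, this same squashing turns the value of the single output neuron into the probability that the image is a human rather than a horse.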
###Code model = tf.keras.models.Sequential([ # Note the input shape is the desired size of the image 300x300 with 3 bytes color # This is the first convolution tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)), tf.keras.layers.MaxPooling2D(2, 2), # The second convolution tf.keras.layers.Conv2D(32, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The third convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The fourth convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # (No fifth convolution block in this version of the model) # Flatten the results to feed into a DNN tf.keras.layers.Flatten(), # 512 neuron hidden layer tf.keras.layers.Dense(512, activation='relu'), # Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans') tf.keras.layers.Dense(1, activation='sigmoid') ]) ###Output _____no_output_____ ###Markdown You can review the network architecture and the output shapes with `model.summary()`. ###Code model.summary() ###Output _____no_output_____ ###Markdown The "output shape" column shows how the size of your feature map evolves in each successive layer. As you saw in an earlier lesson, the convolution layers remove the outermost pixels of the image, and each pooling layer halves the dimensions. Next, you'll configure the specifications for model training. You will train the model with the [`binary_crossentropy`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/BinaryCrossentropy) loss because it's a binary classification problem, and the final activation is a sigmoid. (For a refresher on loss metrics, see this [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/descending-into-ml/video-lecture).) You will use the `rmsprop` optimizer with a learning rate of `0.001`. During training, you will want to monitor classification accuracy.**NOTE**: In this case, using the [RMSprop optimization algorithm](https://wikipedia.org/wiki/Stochastic_gradient_descent#RMSProp) is preferable to [stochastic gradient descent](https://developers.google.com/machine-learning/glossary/#SGD) (SGD), because RMSprop automates learning-rate tuning for us. (Other optimizers, such as [Adam](https://wikipedia.org/wiki/Stochastic_gradient_descent#Adam) and [Adagrad](https://developers.google.com/machine-learning/glossary/#AdaGrad), also automatically adapt the learning rate during training, and would work equally well here.) ###Code from tensorflow.keras.optimizers import RMSprop model.compile(loss='binary_crossentropy', optimizer=RMSprop(learning_rate=0.001), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Data PreprocessingNext step is to set up the data generators that will read pictures in the source folders, convert them to `float32` tensors, and feed them (with their labels) to the model. You'll have one generator for the training images and one for the validation images. These generators will yield batches of images of size 300x300 and their labels (binary).As you may already know, data that goes into neural networks should usually be normalized in some way to make it more amenable to processing by the network (i.e. It is uncommon to feed raw pixels into a ConvNet.)
In this case, you will preprocess the images by normalizing the pixel values to be in the `[0, 1]` range (originally all values are in the `[0, 255]` range).In Keras, this can be done via the `keras.preprocessing.image.ImageDataGenerator` class using the `rescale` parameter. This `ImageDataGenerator` class allows you to instantiate generators of augmented image batches (and their labels) via `.flow(data, labels)` or `.flow_from_directory(directory)`. ###Code from tensorflow.keras.preprocessing.image import ImageDataGenerator # All images will be rescaled by 1./255 train_datagen = ImageDataGenerator(rescale=1/255) # Flow training images in batches of 128 using train_datagen generator train_generator = train_datagen.flow_from_directory( './horse-or-human/', # This is the source directory for training images target_size=(300, 300), # All images will be resized to 300x300 batch_size=128, # Since we use binary_crossentropy loss, we need binary labels class_mode='binary') ###Output _____no_output_____ ###Markdown TrainingYou can start training for 15 epochs -- this may take a few minutes to run.Do note the values per epoch.The `loss` and `accuracy` are great indicators of progress in training. `loss` measures the current model prediction against the known labels, calculating the result. `accuracy`, on the other hand, is the portion of correct guesses. ###Code history = model.fit( train_generator, steps_per_epoch=8, epochs=10, verbose=2, callbacks=[callbacks]) ###Output _____no_output_____ ###Markdown Model PredictionNow take a look at actually running a prediction using the model. This code will allow you to choose 1 or more files from your file system, upload them, and run them through the model, giving an indication of whether the object is a horse or a human.**Important Note:** Due to some compatibility issues, the following code block will result in an error after you select the images(s) to upload if you are running this notebook as a `Colab` on the `Safari` browser. For all other browsers, continue with the next code block and ignore the next one after it._For Safari users: please comment out or skip the code block below, uncomment the next code block and run it._ ###Code ## CODE BLOCK FOR NON-SAFARI BROWSERS ## SAFARI USERS: PLEASE SKIP THIS BLOCK AND RUN THE NEXT ONE INSTEAD import numpy as np from google.colab import files from keras.preprocessing import image uploaded = files.upload() for fn in uploaded.keys(): # predicting images path = '/content/' + fn img = image.load_img(path, target_size=(300, 300)) x = image.img_to_array(img) x /= 255 x = np.expand_dims(x, axis=0) images = np.vstack([x]) classes = model.predict(images, batch_size=10) print(classes[0]) if classes[0]>0.5: print(fn + " is a human") else: print(fn + " is a horse") ###Output _____no_output_____ ###Markdown `Safari` users will need to upload the images(s) manually in their workspace. Please follow the instructions, uncomment the code block below and run it.Instructions on how to upload image(s) manually in a Colab:1. Select the `folder` icon on the left `menu bar`.2. Click on the `folder with an arrow pointing upwards` named `..`3. Click on the `folder` named `tmp`.4. Inside of the `tmp` folder, `create a new folder` called `images`. You'll see the `New folder` option by clicking the `3 vertical dots` menu button next to the `tmp` folder.5. Inside of the new `images` folder, upload an image(s) of your choice, preferably of either a horse or a human. Drag and drop the images(s) on top of the `images` folder.6. 
Uncomment and run the code block below. ###Code # # CODE BLOCK FOR SAFARI USERS # import numpy as np # from keras.preprocessing import image # import os # images = os.listdir("/tmp/images") # print(images) # for i in images: # print() # # predicting images # path = '/tmp/images/' + i # img = image.load_img(path, target_size=(300, 300)) # x = image.img_to_array(img) # x /= 255 # x = np.expand_dims(x, axis=0) # images = np.vstack([x]) # classes = model.predict(images, batch_size=10) # print(classes[0]) # if classes[0]>0.5: # print(i + " is a human") # else: # print(i + " is a horse") ###Output _____no_output_____ ###Markdown Visualizing Intermediate RepresentationsTo get a feel for what kind of features your CNN has learned, one fun thing to do is to visualize how an input gets transformed as it goes through the model.You can pick a random image from the training set, and then generate a figure where each row is the output of a layer, and each image in the row is a specific filter in that output feature map. Rerun this cell to generate intermediate representations for a variety of training images. ###Code import numpy as np import random from tensorflow.keras.preprocessing.image import img_to_array, load_img # Define a new Model that will take an image as input, and will output # intermediate representations for all layers in the previous model after # the first. successive_outputs = [layer.output for layer in model.layers[1:]] visualization_model = tf.keras.models.Model(inputs = model.input, outputs = successive_outputs) # Prepare a random input image from the training set. horse_img_files = [os.path.join(train_horse_dir, f) for f in train_horse_names] human_img_files = [os.path.join(train_human_dir, f) for f in train_human_names] img_path = random.choice(horse_img_files + human_img_files) img = load_img(img_path, target_size=(300, 300)) # this is a PIL image x = img_to_array(img) # Numpy array with shape (300, 300, 3) x = x.reshape((1,) + x.shape) # Numpy array with shape (1, 300, 300, 3) # Scale by 1/255 x /= 255 # Run the image through the network, thus obtaining all # intermediate representations for this image. successive_feature_maps = visualization_model.predict(x) # These are the names of the layers, so you can have them as part of the plot layer_names = [layer.name for layer in model.layers[1:]] # Display the representations for layer_name, feature_map in zip(layer_names, successive_feature_maps): if len(feature_map.shape) == 4: # Just do this for the conv / maxpool layers, not the fully-connected layers n_features = feature_map.shape[-1] # number of features in feature map # The feature map has shape (1, size, size, n_features) size = feature_map.shape[1] # Tile the images in this matrix display_grid = np.zeros((size, size * n_features)) for i in range(n_features): x = feature_map[0, :, :, i] x -= x.mean() x /= x.std() x *= 64 x += 128 x = np.clip(x, 0, 255).astype('uint8') # Tile each filter into this big horizontal grid display_grid[:, i * size : (i + 1) * size] = x # Display the grid scale = 20. / n_features plt.figure(figsize=(scale * n_features, scale)) plt.title(layer_name) plt.grid(False) plt.imshow(display_grid, aspect='auto', cmap='viridis') ###Output _____no_output_____ ###Markdown You can see above how the pixels highlighted turn to increasingly abstract and compact representations, especially at the bottom grid. 
The representations downstream start highlighting what the network pays attention to, and they show fewer and fewer features being "activated"; most are set to zero. This is called _representation sparsity_ and is a key feature of deep learning. These representations carry increasingly less information about the original pixels of the image, but increasingly refined information about the class of the image. You can think of a convnet (or a deep network in general) as an information distillation pipeline wherein each layer filters out the most useful features. Clean UpYou will continue with a similar exercise in the next lab but before that, run the following cell to terminate the kernel and free memory resources: ###Code import os, signal os.kill(os.getpid(), signal.SIGKILL) ###Output _____no_output_____ ###Markdown Copyright 2019 The TensorFlow Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown **IMPORTANT NOTE:** This notebook is designed to run as a Colab. Click the button on top that says, `Open in Colab`, to run this notebook as a Colab. Running the notebook on your local machine might result in some of the code blocks throwing errors. Run the code below to download the dataset `horse-or-human.zip`. ###Code !gdown --id 1onaG42NZft3wCE1WH0GDEbUhu75fedP5 ###Output _____no_output_____ ###Markdown The following python code will use the OS library to use Operating System libraries, giving you access to the file system, and the zipfile library allowing you to unzip the data. ###Code import os import zipfile local_zip = './horse-or-human.zip' zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('./horse-or-human') zip_ref.close() ###Output _____no_output_____ ###Markdown The contents of the .zip are extracted to the base directory `./horse-or-human`, which in turn each contain `horses` and `humans` subdirectories.In short: The training set is the data that is used to tell the neural network model that 'this is what a horse looks like', 'this is what a human looks like' etc. One thing to pay attention to in this sample: We do not explicitly label the images as horses or humans. If you remember with the handwriting example earlier, we had labelled 'this is a 1', 'this is a 7' etc. Later you'll see something called an ImageGenerator being used -- and this is coded to read images from subdirectories, and automatically label them from the name of that subdirectory. So, for example, you will have a 'training' directory containing a 'horses' directory and a 'humans' one. ImageGenerator will label the images appropriately for you, reducing a coding step. 
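To see what that saved step would look like if we did it by hand, here is a small illustrative sketch (an addition, not part of the original notebook) that pairs each file path with a 0/1 label based on its folder name; it assumes the dataset has already been extracted to `./horse-or-human`, and the variable names are just for this sketch. ###Code
# Illustrative sketch only: manually pairing file paths with labels,
# i.e. the bookkeeping that ImageGenerator will do for us automatically.
import os

base_dir = './horse-or-human'
labelled_paths = []
for label, folder in enumerate(['horses', 'humans']):  # 0 = horse, 1 = human
    folder_path = os.path.join(base_dir, folder)
    for fname in os.listdir(folder_path):
        labelled_paths.append((os.path.join(folder_path, fname), label))

print('total labelled examples:', len(labelled_paths))
print(labelled_paths[:3])
###Output _____no_output_____ ###Markdown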
Let's define each of these directories: ###Code # Directory with our training horse pictures train_horse_dir = os.path.join('./horse-or-human/horses') # Directory with our training human pictures train_human_dir = os.path.join('./horse-or-human/humans') ###Output _____no_output_____ ###Markdown Now, let's see what the filenames look like in the `horses` and `humans` training directories: ###Code train_horse_names = os.listdir(train_horse_dir) print(train_horse_names[:10]) train_human_names = os.listdir(train_human_dir) print(train_human_names[:10]) ###Output _____no_output_____ ###Markdown Let's find out the total number of horse and human images in the directories: ###Code print('total training horse images:', len(os.listdir(train_horse_dir))) print('total training human images:', len(os.listdir(train_human_dir))) ###Output _____no_output_____ ###Markdown Now let's take a look at a few pictures to get a better sense of what they look like. First, configure the matplot parameters: ###Code %matplotlib inline import matplotlib.pyplot as plt import matplotlib.image as mpimg # Parameters for our graph; we'll output images in a 4x4 configuration nrows = 4 ncols = 4 # Index for iterating over images pic_index = 0 ###Output _____no_output_____ ###Markdown Now, display a batch of 8 horse and 8 human pictures. You can rerun the cell to see a fresh batch each time: ###Code # Set up matplotlib fig, and size it to fit 4x4 pics fig = plt.gcf() fig.set_size_inches(ncols * 4, nrows * 4) pic_index += 8 next_horse_pix = [os.path.join(train_horse_dir, fname) for fname in train_horse_names[pic_index-8:pic_index]] next_human_pix = [os.path.join(train_human_dir, fname) for fname in train_human_names[pic_index-8:pic_index]] for i, img_path in enumerate(next_horse_pix+next_human_pix): # Set up subplot; subplot indices start at 1 sp = plt.subplot(nrows, ncols, i + 1) sp.axis('Off') # Don't show axes (or gridlines) img = mpimg.imread(img_path) plt.imshow(img) plt.show() ###Output _____no_output_____ ###Markdown Building a Small Model from ScratchBut before we continue, let's start defining the model:Step 1 will be to import tensorflow. ###Code import tensorflow as tf ###Output _____no_output_____ ###Markdown We then add convolutional layers as in the previous example, and flatten the final result to feed into the densely connected layers. Finally we add the densely connected layers. Note that because we are facing a two-class classification problem, i.e. a *binary classification problem*, we will end our network with a [*sigmoid* activation](https://wikipedia.org/wiki/Sigmoid_function), so that the output of our network will be a single scalar between 0 and 1, encoding the probability that the current image is class 1 (as opposed to class 0). 
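As a tiny illustration of how we will use that single scalar later on (an addition, not part of the original notebook), the sketch below thresholds a few made-up sigmoid outputs at 0.5 to turn them into class predictions. ###Code
# Illustration only: converting sigmoid outputs into class labels.
# The probabilities below are made-up examples, not real model outputs.
import numpy as np

example_outputs = np.array([0.02, 0.49, 0.51, 0.97])
predicted_classes = (example_outputs > 0.5).astype(int)  # 0 = horse, 1 = human

for p, c in zip(example_outputs, predicted_classes):
    print(p, '->', 'human' if c == 1 else 'horse')
###Output _____no_output_____ ###Markdown This is the same `classes[0]>0.5` check used in the prediction code further down.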
###Code model = tf.keras.models.Sequential([ # Note the input shape is the desired size of the image 300x300 with 3 bytes color # This is the first convolution tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)), tf.keras.layers.MaxPooling2D(2, 2), # The second convolution tf.keras.layers.Conv2D(32, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The third convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The fourth convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The fifth convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # Flatten the results to feed into a DNN tf.keras.layers.Flatten(), # 512 neuron hidden layer tf.keras.layers.Dense(512, activation='relu'), # Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans') tf.keras.layers.Dense(1, activation='sigmoid') ]) ###Output _____no_output_____ ###Markdown The model.summary() method call prints a summary of the NN ###Code model.summary() ###Output _____no_output_____ ###Markdown The "output shape" column shows how the size of your feature map evolves in each successive layer. The convolution layers reduce the size of the feature maps by a bit due to padding, and each pooling layer halves the dimensions. Next, we'll configure the specifications for model training. We will train our model with the `binary_crossentropy` loss, because it's a binary classification problem and our final activation is a sigmoid. (For a refresher on loss metrics, see the [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/descending-into-ml/video-lecture).) We will use the `rmsprop` optimizer with a learning rate of `0.001`. During training, we will want to monitor classification accuracy.**NOTE**: In this case, using the [RMSprop optimization algorithm](https://wikipedia.org/wiki/Stochastic_gradient_descentRMSProp) is preferable to [stochastic gradient descent](https://developers.google.com/machine-learning/glossary/SGD) (SGD), because RMSprop automates learning-rate tuning for us. (Other optimizers, such as [Adam](https://wikipedia.org/wiki/Stochastic_gradient_descentAdam) and [Adagrad](https://developers.google.com/machine-learning/glossary/AdaGrad), also automatically adapt the learning rate during training, and would work equally well here.) ###Code from tensorflow.keras.optimizers import RMSprop model.compile(loss='binary_crossentropy', optimizer=RMSprop(learning_rate=0.001), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Data PreprocessingLet's set up data generators that will read pictures in our source folders, convert them to `float32` tensors, and feed them (with their labels) to our network. We'll have one generator for the training images and one for the validation images. Our generators will yield batches of images of size 300x300 and their labels (binary).As you may already know, data that goes into neural networks should usually be normalized in some way to make it more amenable to processing by the network. (It is uncommon to feed raw pixels into a convnet.) In our case, we will preprocess our images by normalizing the pixel values to be in the `[0, 1]` range (originally all values are in the `[0, 255]` range).In Keras this can be done via the `keras.preprocessing.image.ImageDataGenerator` class using the `rescale` parameter. 
This `ImageDataGenerator` class allows you to instantiate generators of augmented image batches (and their labels) via `.flow(data, labels)` or `.flow_from_directory(directory)`. These generators can then be used with the Keras model methods that accept data generators as inputs: `fit`, `evaluate_generator`, and `predict_generator`. ###Code from tensorflow.keras.preprocessing.image import ImageDataGenerator # All images will be rescaled by 1./255 train_datagen = ImageDataGenerator(rescale=1/255) # Flow training images in batches of 128 using train_datagen generator train_generator = train_datagen.flow_from_directory( './horse-or-human/', # This is the source directory for training images target_size=(300, 300), # All images will be resized to 300x300 batch_size=128, # Since we use binary_crossentropy loss, we need binary labels class_mode='binary') ###Output _____no_output_____ ###Markdown TrainingLet's train for 15 epochs -- this may take a few minutes to run.Do note the values per epoch.The Loss and Accuracy are a great indication of progress of training. It's making a guess as to the classification of the training data, and then measuring it against the known label, calculating the result. Accuracy is the portion of correct guesses. ###Code history = model.fit( train_generator, steps_per_epoch=8, epochs=15, verbose=1) ###Output _____no_output_____ ###Markdown Running the ModelLet's now take a look at actually running a prediction using the model. This code will allow you to choose 1 or more files from your file system; it will then upload them and run them through the model, giving an indication of whether the object is a horse or a human.**Important Note:** Due to some compatibility issues, the following code block will result in an error after you select the image(s) to upload if you are running this notebook as a `Colab` on the `Safari` browser. For `all other browsers`, continue with the next code block and ignore the next one after it.If you are running the `Colab` on `Safari`, comment out the code block below, uncomment the next code block and run it. ###Code import numpy as np from google.colab import files from keras.preprocessing import image uploaded = files.upload() for fn in uploaded.keys(): # predicting images path = '/content/' + fn img = image.load_img(path, target_size=(300, 300)) x = image.img_to_array(img) x /= 255 # rescale to [0, 1] to match the training preprocessing x = np.expand_dims(x, axis=0) images = np.vstack([x]) classes = model.predict(images, batch_size=10) print(classes[0]) if classes[0]>0.5: print(fn + " is a human") else: print(fn + " is a horse") ###Output _____no_output_____ ###Markdown Those running this `Colab` on the `Safari` browser can upload the image(s) manually. Follow the instructions, uncomment the code block below and run it.Instructions on how to upload image(s) manually in a Colab:1. Select the `folder` icon on the left `menu bar`.2. Click on the `folder with an arrow pointing upwards` named `..`3. Click on the `folder` named `tmp`.4. Inside of the `tmp` folder, `create a new folder` called `images`. You'll see the `New folder` option by clicking the `3 vertical dots` menu button next to the `tmp` folder.5. Inside of the new `images` folder, upload an image(s) of your choice, preferably of either a horse or a human. Drag and drop the image(s) on top of the `images` folder.6.
###Code # import numpy as np # from keras.preprocessing import image # import os # images = os.listdir("/tmp/images") # print(images) # for i in images: # print() # # predicting images # path = '/tmp/images/' + i # img = image.load_img(path, target_size=(300, 300)) # x = image.img_to_array(img) # x /= 255 # x = np.expand_dims(x, axis=0) # images = np.vstack([x]) # classes = model.predict(images, batch_size=10) # print(classes[0]) # if classes[0]>0.5: # print(i + " is a human") # else: # print(i + " is a horse") ###Output _____no_output_____ ###Markdown Visualizing Intermediate RepresentationsTo get a feel for what kind of features our convnet has learned, one fun thing to do is to visualize how an input gets transformed as it goes through the convnet.Let's pick a random image from the training set, and then generate a figure where each row is the output of a layer, and each image in the row is a specific filter in that output feature map. Rerun this cell to generate intermediate representations for a variety of training images. ###Code import numpy as np import random from tensorflow.keras.preprocessing.image import img_to_array, load_img # Let's define a new Model that will take an image as input, and will output # intermediate representations for all layers in the previous model after # the first. successive_outputs = [layer.output for layer in model.layers[1:]] #visualization_model = Model(img_input, successive_outputs) visualization_model = tf.keras.models.Model(inputs = model.input, outputs = successive_outputs) # Let's prepare a random input image from the training set. horse_img_files = [os.path.join(train_horse_dir, f) for f in train_horse_names] human_img_files = [os.path.join(train_human_dir, f) for f in train_human_names] img_path = random.choice(horse_img_files + human_img_files) img = load_img(img_path, target_size=(300, 300)) # this is a PIL image x = img_to_array(img) # Numpy array with shape (300, 300, 3) x = x.reshape((1,) + x.shape) # Numpy array with shape (1, 300, 300, 3) # Rescale by 1/255 x /= 255 # Let's run our image through our network, thus obtaining all # intermediate representations for this image. successive_feature_maps = visualization_model.predict(x) # These are the names of the layers, so we can have them as part of our plot layer_names = [layer.name for layer in model.layers[1:]] # Now let's display our representations for layer_name, feature_map in zip(layer_names, successive_feature_maps): if len(feature_map.shape) == 4: # Just do this for the conv / maxpool layers, not the fully-connected layers n_features = feature_map.shape[-1] # number of features in feature map # The feature map has shape (1, size, size, n_features) size = feature_map.shape[1] # We will tile our images in this matrix display_grid = np.zeros((size, size * n_features)) for i in range(n_features): # Postprocess the feature to make it visually palatable x = feature_map[0, :, :, i] x -= x.mean() x /= x.std() x *= 64 x += 128 x = np.clip(x, 0, 255).astype('uint8') # We'll tile each filter into this big horizontal grid display_grid[:, i * size : (i + 1) * size] = x # Display the grid scale = 20. / n_features plt.figure(figsize=(scale * n_features, scale)) plt.title(layer_name) plt.grid(False) plt.imshow(display_grid, aspect='auto', cmap='viridis') ###Output _____no_output_____ ###Markdown As you can see, we go from the raw pixels of the images to increasingly abstract and compact representations.
The representations downstream start highlighting what the network pays attention to, and they show fewer and fewer features being "activated"; most are set to zero. This is called "sparsity." Representation sparsity is a key feature of deep learning.These representations carry increasingly less information about the original pixels of the image, but increasingly refined information about the class of the image. You can think of a convnet (or a deep network in general) as an information distillation pipeline. Clean UpBefore running the next exercise, run the following cell to terminate the kernel and free memory resources: ###Code # import os, signal # os.kill(os.getpid(), signal.SIGKILL) ###Output _____no_output_____ ###Markdown Ungraded Lab: Training with ImageDataGeneratorIn this lab, you will build a train a model on the [Horses or Humans](https://www.tensorflow.org/datasets/catalog/horses_or_humans) dataset. This contains over a thousand images of horses and humans with varying poses and filesizes. You will use the [ImageDataGenerator](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator) class to prepare this dataset so it can be fed to a convolutional neural network.**IMPORTANT NOTE:** This notebook is designed to run as a Colab. Running it on your local machine might result in some of the code blocks throwing errors. Run the code below to download the compressed dataset `horse-or-human.zip`. ###Code !wget https://storage.googleapis.com/tensorflow-1-public/course2/week3/horse-or-human.zip ###Output _____no_output_____ ###Markdown You can then unzip the archive using the [zipfile](https://docs.python.org/3/library/zipfile.html) module. ###Code import zipfile # Unzip the dataset local_zip = './horse-or-human.zip' zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('./horse-or-human') zip_ref.close() ###Output _____no_output_____ ###Markdown The contents of the .zip are extracted to the base directory `./horse-or-human`, which in turn each contain `horses` and `humans` subdirectories.In short: The training set is the data that is used to tell the neural network model that 'this is what a horse looks like' and 'this is what a human looks like'.One thing to pay attention to in this sample: We do not explicitly label the images as horses or humans. You will use the ImageDataGenerator API instead -- and this is coded to automatically label images according to the directory names and structure. So, for example, you will have a 'training' directory containing a 'horses' directory and a 'humans' one. `ImageDataGenerator` will label the images appropriately for you, reducing a coding step. 
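If you want a peek at what the generator will eventually hand to the model, the optional sketch below (an added illustration, not part of the original lab) builds a throwaway generator and pulls a single batch to inspect its shapes and labels; it assumes the dataset has already been extracted to `./horse-or-human`, and the variable names are just for this sketch. ###Code
# Optional sketch: pull one batch from a throwaway generator and inspect it.
# Assumes ./horse-or-human/ already contains the 'horses' and 'humans' folders.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

peek_generator = ImageDataGenerator(rescale=1/255).flow_from_directory(
    './horse-or-human/',
    target_size=(300, 300),
    batch_size=32,
    class_mode='binary')

image_batch, label_batch = next(peek_generator)
print('image batch shape:', image_batch.shape)  # (32, 300, 300, 3)
print('label batch shape:', label_batch.shape)  # (32,)
print('first few labels:', label_batch[:8])     # 0.0 = horse, 1.0 = human
###Output _____no_output_____ ###Markdown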
You can now define each of these directories: ###Code import os # Directory with our training horse pictures train_horse_dir = os.path.join('./horse-or-human/horses') # Directory with our training human pictures train_human_dir = os.path.join('./horse-or-human/humans') ###Output _____no_output_____ ###Markdown Now see what the filenames look like in the `horses` and `humans` training directories: ###Code train_horse_names = os.listdir(train_horse_dir) print(train_horse_names[:10]) train_human_names = os.listdir(train_human_dir) print(train_human_names[:10]) ###Output _____no_output_____ ###Markdown You can also find out the total number of horse and human images in the directories: ###Code print('total training horse images:', len(os.listdir(train_horse_dir))) print('total training human images:', len(os.listdir(train_human_dir))) ###Output _____no_output_____ ###Markdown Now take a look at a few pictures to get a better sense of what they look like. First, configure the `matplotlib` parameters: ###Code %matplotlib inline import matplotlib.pyplot as plt import matplotlib.image as mpimg # Parameters for our graph; we'll output images in a 4x4 configuration nrows = 4 ncols = 4 # Index for iterating over images pic_index = 0 ###Output _____no_output_____ ###Markdown Now, display a batch of 8 horse and 8 human pictures. You can rerun the cell to see a fresh batch each time: ###Code # Set up matplotlib fig, and size it to fit 4x4 pics fig = plt.gcf() fig.set_size_inches(ncols * 4, nrows * 4) pic_index += 8 next_horse_pix = [os.path.join(train_horse_dir, fname) for fname in train_horse_names[pic_index-8:pic_index]] next_human_pix = [os.path.join(train_human_dir, fname) for fname in train_human_names[pic_index-8:pic_index]] for i, img_path in enumerate(next_horse_pix+next_human_pix): # Set up subplot; subplot indices start at 1 sp = plt.subplot(nrows, ncols, i + 1) sp.axis('Off') # Don't show axes (or gridlines) img = mpimg.imread(img_path) plt.imshow(img) plt.show() ###Output _____no_output_____ ###Markdown Building a Small Model from ScratchNow you can define the model architecture that you will train.Step 1 will be to import tensorflow. ###Code import tensorflow as tf ###Output _____no_output_____ ###Markdown You then add convolutional layers as in the previous example, and flatten the final result to feed into the densely connected layers. Note that because this is a two-class classification problem, i.e. a *binary classification problem*, you will end your network with a [*sigmoid* activation](https://wikipedia.org/wiki/Sigmoid_function). This makes the output value of your network a single scalar between 0 and 1, encoding the probability that the current image is class 1 (as opposed to class 0). 
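Before defining the stack, it can help to work out how the spatial dimensions will shrink layer by layer. The short sketch below (an added illustration, not part of the original lab) does this arithmetic for 300x300 inputs, assuming the un-padded 3x3 convolutions and 2x2 max-pooling used in this lab. ###Code
# Illustrative sketch: how 300x300 inputs shrink through five blocks of
# 3x3 'valid' convolution (trims a 1-pixel border) + 2x2 max-pooling (integer-halves).
size = 300
for block in range(1, 6):
    size = (size - 2) // 2
    print('after conv/pool block', block, ':', size, 'x', size)
###Output _____no_output_____ ###Markdown The feature maps end up at 7x7 before they are flattened, which is what you should see in the `model.summary()` output below.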
###Code model = tf.keras.models.Sequential([ # Note the input shape is the desired size of the image 300x300 with 3 bytes color # This is the first convolution tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)), tf.keras.layers.MaxPooling2D(2, 2), # The second convolution tf.keras.layers.Conv2D(32, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The third convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The fourth convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The fifth convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # Flatten the results to feed into a DNN tf.keras.layers.Flatten(), # 512 neuron hidden layer tf.keras.layers.Dense(512, activation='relu'), # Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans') tf.keras.layers.Dense(1, activation='sigmoid') ]) ###Output _____no_output_____ ###Markdown You can review the network architecture and the output shapes with `model.summary()`. ###Code model.summary() ###Output _____no_output_____ ###Markdown The "output shape" column shows how the size of your feature map evolves in each successive layer. As you saw in an earlier lesson, the convolution layers removes the outermost pixels of the image, and each pooling layer halves the dimensions. Next, you'll configure the specifications for model training. You will train the model with the [`binary_crossentropy`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/BinaryCrossentropy) loss because it's a binary classification problem, and the final activation is a sigmoid. (For a refresher on loss metrics, see this [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/descending-into-ml/video-lecture).) You will use the `rmsprop` optimizer with a learning rate of `0.001`. During training, you will want to monitor classification accuracy.**NOTE**: In this case, using the [RMSprop optimization algorithm](https://wikipedia.org/wiki/Stochastic_gradient_descentRMSProp) is preferable to [stochastic gradient descent](https://developers.google.com/machine-learning/glossary/SGD) (SGD), because RMSprop automates learning-rate tuning for us. (Other optimizers, such as [Adam](https://wikipedia.org/wiki/Stochastic_gradient_descentAdam) and [Adagrad](https://developers.google.com/machine-learning/glossary/AdaGrad), also automatically adapt the learning rate during training, and would work equally well here.) ###Code from tensorflow.keras.optimizers import RMSprop model.compile(loss='binary_crossentropy', optimizer=RMSprop(learning_rate=0.001), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Data PreprocessingNext step is to set up the data generators that will read pictures in the source folders, convert them to `float32` tensors, and feed them (with their labels) to the model. You'll have one generator for the training images and one for the validation images. These generators will yield batches of images of size 300x300 and their labels (binary).As you may already know, data that goes into neural networks should usually be normalized in some way to make it more amenable to processing by the network (i.e. It is uncommon to feed raw pixels into a ConvNet.) 
In this case, you will preprocess the images by normalizing the pixel values to be in the `[0, 1]` range (originally all values are in the `[0, 255]` range).In Keras, this can be done via the `keras.preprocessing.image.ImageDataGenerator` class using the `rescale` parameter. This `ImageDataGenerator` class allows you to instantiate generators of augmented image batches (and their labels) via `.flow(data, labels)` or `.flow_from_directory(directory)`. ###Code from tensorflow.keras.preprocessing.image import ImageDataGenerator # All images will be rescaled by 1./255 train_datagen = ImageDataGenerator(rescale=1/255) # Flow training images in batches of 128 using train_datagen generator train_generator = train_datagen.flow_from_directory( './horse-or-human/', # This is the source directory for training images target_size=(300, 300), # All images will be resized to 300x300 batch_size=128, # Since we use binary_crossentropy loss, we need binary labels class_mode='binary') ###Output _____no_output_____ ###Markdown TrainingYou can start training for 15 epochs -- this may take a few minutes to run.Do note the values per epoch.The `loss` and `accuracy` are great indicators of progress in training. `loss` measures the current model prediction against the known labels, calculating the result. `accuracy`, on the other hand, is the portion of correct guesses. ###Code history = model.fit( train_generator, steps_per_epoch=8, epochs=15, verbose=1) ###Output _____no_output_____ ###Markdown Model PredictionNow take a look at actually running a prediction using the model. This code will allow you to choose 1 or more files from your file system, upload them, and run them through the model, giving an indication of whether the object is a horse or a human.**Important Note:** Due to some compatibility issues, the following code block will result in an error after you select the images(s) to upload if you are running this notebook as a `Colab` on the `Safari` browser. For all other browsers, continue with the next code block and ignore the next one after it._For Safari users: please comment out or skip the code block below, uncomment the next code block and run it._ ###Code ## CODE BLOCK FOR NON-SAFARI BROWSERS ## SAFARI USERS: PLEASE SKIP THIS BLOCK AND RUN THE NEXT ONE INSTEAD import numpy as np from google.colab import files from keras.preprocessing import image uploaded = files.upload() for fn in uploaded.keys(): # predicting images path = '/content/' + fn img = image.load_img(path, target_size=(300, 300)) x = image.img_to_array(img) x /= 255 x = np.expand_dims(x, axis=0) images = np.vstack([x]) classes = model.predict(images, batch_size=10) print(classes[0]) if classes[0]>0.5: print(fn + " is a human") else: print(fn + " is a horse") ###Output _____no_output_____ ###Markdown `Safari` users will need to upload the images(s) manually in their workspace. Please follow the instructions, uncomment the code block below and run it.Instructions on how to upload image(s) manually in a Colab:1. Select the `folder` icon on the left `menu bar`.2. Click on the `folder with an arrow pointing upwards` named `..`3. Click on the `folder` named `tmp`.4. Inside of the `tmp` folder, `create a new folder` called `images`. You'll see the `New folder` option by clicking the `3 vertical dots` menu button next to the `tmp` folder.5. Inside of the new `images` folder, upload an image(s) of your choice, preferably of either a horse or a human. Drag and drop the images(s) on top of the `images` folder.6. 
Uncomment and run the code block below. ###Code # # CODE BLOCK FOR SAFARI USERS # import numpy as np # from keras.preprocessing import image # import os # images = os.listdir("/tmp/images") # print(images) # for i in images: # print() # # predicting images # path = '/tmp/images/' + i # img = image.load_img(path, target_size=(300, 300)) # x = image.img_to_array(img) # x /= 255 # x = np.expand_dims(x, axis=0) # images = np.vstack([x]) # classes = model.predict(images, batch_size=10) # print(classes[0]) # if classes[0]>0.5: # print(i + " is a human") # else: # print(i + " is a horse") ###Output _____no_output_____ ###Markdown Visualizing Intermediate RepresentationsTo get a feel for what kind of features your CNN has learned, one fun thing to do is to visualize how an input gets transformed as it goes through the model.You can pick a random image from the training set, and then generate a figure where each row is the output of a layer, and each image in the row is a specific filter in that output feature map. Rerun this cell to generate intermediate representations for a variety of training images. ###Code import numpy as np import random from tensorflow.keras.preprocessing.image import img_to_array, load_img # Define a new Model that will take an image as input, and will output # intermediate representations for all layers in the previous model after # the first. successive_outputs = [layer.output for layer in model.layers[1:]] visualization_model = tf.keras.models.Model(inputs = model.input, outputs = successive_outputs) # Prepare a random input image from the training set. horse_img_files = [os.path.join(train_horse_dir, f) for f in train_horse_names] human_img_files = [os.path.join(train_human_dir, f) for f in train_human_names] img_path = random.choice(horse_img_files + human_img_files) img = load_img(img_path, target_size=(300, 300)) # this is a PIL image x = img_to_array(img) # Numpy array with shape (300, 300, 3) x = x.reshape((1,) + x.shape) # Numpy array with shape (1, 300, 300, 3) # Scale by 1/255 x /= 255 # Run the image through the network, thus obtaining all # intermediate representations for this image. successive_feature_maps = visualization_model.predict(x) # These are the names of the layers, so you can have them as part of the plot layer_names = [layer.name for layer in model.layers[1:]] # Display the representations for layer_name, feature_map in zip(layer_names, successive_feature_maps): if len(feature_map.shape) == 4: # Just do this for the conv / maxpool layers, not the fully-connected layers n_features = feature_map.shape[-1] # number of features in feature map # The feature map has shape (1, size, size, n_features) size = feature_map.shape[1] # Tile the images in this matrix display_grid = np.zeros((size, size * n_features)) for i in range(n_features): x = feature_map[0, :, :, i] x -= x.mean() x /= x.std() x *= 64 x += 128 x = np.clip(x, 0, 255).astype('uint8') # Tile each filter into this big horizontal grid display_grid[:, i * size : (i + 1) * size] = x # Display the grid scale = 20. / n_features plt.figure(figsize=(scale * n_features, scale)) plt.title(layer_name) plt.grid(False) plt.imshow(display_grid, aspect='auto', cmap='viridis') ###Output _____no_output_____ ###Markdown You can see above how the pixels highlighted turn to increasingly abstract and compact representations, especially at the bottom grid. 
The representations downstream start highlighting what the network pays attention to, and they show fewer and fewer features being "activated"; most are set to zero. This is called _representation sparsity_ and is a key feature of deep learning. These representations carry increasingly less information about the original pixels of the image, but increasingly refined information about the class of the image. You can think of a convnet (or a deep network in general) as an information distillation pipeline wherein each layer filters out the most useful features. Clean UpYou will continue with a similar exercise in the next lab but before that, run the following cell to terminate the kernel and free memory resources: ###Code import os, signal os.kill(os.getpid(), signal.SIGKILL) ###Output _____no_output_____ ###Markdown Ungraded Lab: Training with ImageDataGeneratorIn this lab, you will build a train a model on the [Horses or Humans](https://www.tensorflow.org/datasets/catalog/horses_or_humans) dataset. This contains over a thousand images of horses and humans with varying poses and filesizes. You will use the [ImageDataGenerator](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator) class to prepare this dataset so it can be fed to a convolutional neural network.**IMPORTANT NOTE:** This notebook is designed to run as a Colab. Running it on your local machine might result in some of the code blocks throwing errors. Run the code below to download the compressed dataset `horse-or-human.zip`. ###Code !wget https://storage.googleapis.com/tensorflow-1-public/course2/week3/horse-or-human.zip ###Output _____no_output_____ ###Markdown You can then unzip the archive using the [zipfile](https://docs.python.org/3/library/zipfile.html) module. ###Code import zipfile # Unzip the dataset local_zip = './horse-or-human.zip' zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('./horse-or-human') zip_ref.close() ###Output _____no_output_____ ###Markdown The contents of the .zip are extracted to the base directory `./horse-or-human`, which in turn each contain `horses` and `humans` subdirectories.In short: The training set is the data that is used to tell the neural network model that 'this is what a horse looks like' and 'this is what a human looks like'.One thing to pay attention to in this sample: We do not explicitly label the images as horses or humans. You will use the ImageDataGenerator API instead -- and this is coded to automatically label images according to the directory names and structure. So, for example, you will have a 'training' directory containing a 'horses' directory and a 'humans' one. `ImageDataGenerator` will label the images appropriately for you, reducing a coding step. 
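As a quick optional sanity check, you can walk the extracted folder and confirm that it contains exactly one sub-directory per class, which is the layout `flow_from_directory` relies on; once the generator is created later in the lab, its `class_indices` attribute shows the integer label assigned to each folder name. This cell is a minimal sketch of that check, not part of the original lab. ###Code import os

# Walk the extracted dataset: flow_from_directory expects one sub-folder per class,
# and each sub-folder name becomes a class label.
for root, dirs, files in os.walk('./horse-or-human'):
    print(f"{root}: {len(dirs)} sub-folders, {len(files)} files")
###Output _____no_output_____
###Markdown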
You can now define each of these directories: ###Code import os # Directory with our training horse pictures train_horse_dir = os.path.join('./horse-or-human/horses') # Directory with our training human pictures train_human_dir = os.path.join('./horse-or-human/humans') ###Output _____no_output_____ ###Markdown Now see what the filenames look like in the `horses` and `humans` training directories: ###Code train_horse_names = os.listdir(train_horse_dir) print(train_horse_names[:10]) train_human_names = os.listdir(train_human_dir) print(train_human_names[:10]) ###Output _____no_output_____ ###Markdown You can also find out the total number of horse and human images in the directories: ###Code print('total training horse images:', len(os.listdir(train_horse_dir))) print('total training human images:', len(os.listdir(train_human_dir))) ###Output _____no_output_____ ###Markdown Now take a look at a few pictures to get a better sense of what they look like. First, configure the `matplotlib` parameters: ###Code %matplotlib inline import matplotlib.pyplot as plt import matplotlib.image as mpimg # Parameters for our graph; we'll output images in a 4x4 configuration nrows = 4 ncols = 4 # Index for iterating over images pic_index = 0 ###Output _____no_output_____ ###Markdown Now, display a batch of 8 horse and 8 human pictures. You can rerun the cell to see a fresh batch each time: ###Code # Set up matplotlib fig, and size it to fit 4x4 pics fig = plt.gcf() fig.set_size_inches(ncols * 4, nrows * 4) pic_index += 8 next_horse_pix = [os.path.join(train_horse_dir, fname) for fname in train_horse_names[pic_index-8:pic_index]] next_human_pix = [os.path.join(train_human_dir, fname) for fname in train_human_names[pic_index-8:pic_index]] for i, img_path in enumerate(next_horse_pix+next_human_pix): # Set up subplot; subplot indices start at 1 sp = plt.subplot(nrows, ncols, i + 1) sp.axis('Off') # Don't show axes (or gridlines) img = mpimg.imread(img_path) plt.imshow(img) plt.show() ###Output _____no_output_____ ###Markdown Building a Small Model from ScratchNow you can define the model architecture that you will train.Step 1 will be to import tensorflow. ###Code import tensorflow as tf ###Output _____no_output_____ ###Markdown You then add convolutional layers as in the previous example, and flatten the final result to feed into the densely connected layers. Note that because this is a two-class classification problem, i.e. a *binary classification problem*, you will end your network with a [*sigmoid* activation](https://wikipedia.org/wiki/Sigmoid_function). This makes the output value of your network a single scalar between 0 and 1, encoding the probability that the current image is class 1 (as opposed to class 0). 
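To make that concrete, here is a tiny illustrative sketch (the probability value is made up, not a real model output) of how such a scalar is turned into a class decision; the prediction cell near the end of this lab applies the same 0.5 threshold. ###Code # Illustrative only: reading a single sigmoid output as a class probability.
p_class_1 = 0.92   # made-up example value, not an actual model output
predicted_class = 1 if p_class_1 > 0.5 else 0
print(f"P(class 1) = {p_class_1:.2f} -> predicted class: {predicted_class}")
###Output _____no_output_____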
###Code model = tf.keras.models.Sequential([ # Note the input shape is the desired size of the image 300x300 with 3 bytes color # This is the first convolution tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)), tf.keras.layers.MaxPooling2D(2, 2), # The second convolution tf.keras.layers.Conv2D(32, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The third convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The fourth convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The fifth convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # Flatten the results to feed into a DNN tf.keras.layers.Flatten(), # 512 neuron hidden layer tf.keras.layers.Dense(512, activation='relu'), # Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans') tf.keras.layers.Dense(1, activation='sigmoid') ]) ###Output _____no_output_____ ###Markdown You can review the network architecture and the output shapes with `model.summary()`. ###Code model.summary() ###Output _____no_output_____ ###Markdown The "output shape" column shows how the size of your feature map evolves in each successive layer. As you saw in an earlier lesson, the convolution layers removes the outermost pixels of the image, and each pooling layer halves the dimensions. Next, you'll configure the specifications for model training. You will train the model with the [`binary_crossentropy`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/BinaryCrossentropy) loss because it's a binary classification problem, and the final activation is a sigmoid. (For a refresher on loss metrics, see this [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/descending-into-ml/video-lecture).) You will use the `rmsprop` optimizer with a learning rate of `0.001`. During training, you will want to monitor classification accuracy.**NOTE**: In this case, using the [RMSprop optimization algorithm](https://wikipedia.org/wiki/Stochastic_gradient_descentRMSProp) is preferable to [stochastic gradient descent](https://developers.google.com/machine-learning/glossary/SGD) (SGD), because RMSprop automates learning-rate tuning for us. (Other optimizers, such as [Adam](https://wikipedia.org/wiki/Stochastic_gradient_descentAdam) and [Adagrad](https://developers.google.com/machine-learning/glossary/AdaGrad), also automatically adapt the learning rate during training, and would work equally well here.) ###Code from tensorflow.keras.optimizers import RMSprop model.compile(loss='binary_crossentropy', optimizer=RMSprop(learning_rate=0.001), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Data PreprocessingNext step is to set up the data generators that will read pictures in the source folders, convert them to `float32` tensors, and feed them (with their labels) to the model. You'll have one generator for the training images and one for the validation images. These generators will yield batches of images of size 300x300 and their labels (binary).As you may already know, data that goes into neural networks should usually be normalized in some way to make it more amenable to processing by the network (i.e. It is uncommon to feed raw pixels into a ConvNet.) 
In this case, you will preprocess the images by normalizing the pixel values to be in the `[0, 1]` range (originally all values are in the `[0, 255]` range).In Keras, this can be done via the `keras.preprocessing.image.ImageDataGenerator` class using the `rescale` parameter. This `ImageDataGenerator` class allows you to instantiate generators of augmented image batches (and their labels) via `.flow(data, labels)` or `.flow_from_directory(directory)`. ###Code from tensorflow.keras.preprocessing.image import ImageDataGenerator # All images will be rescaled by 1./255 train_datagen = ImageDataGenerator(rescale=1/255) # Flow training images in batches of 128 using train_datagen generator train_generator = train_datagen.flow_from_directory( './horse-or-human/', # This is the source directory for training images target_size=(300, 300), # All images will be resized to 300x300 batch_size=128, # Since we use binary_crossentropy loss, we need binary labels class_mode='binary') ###Output _____no_output_____ ###Markdown TrainingYou can start training for 15 epochs -- this may take a few minutes to run.Do note the values per epoch.The `loss` and `accuracy` are great indicators of progress in training. `loss` measures the current model prediction against the known labels, calculating the result. `accuracy`, on the other hand, is the portion of correct guesses. ###Code history = model.fit( train_generator, steps_per_epoch=8, epochs=15, verbose=1) ###Output _____no_output_____ ###Markdown Model PredictionNow take a look at actually running a prediction using the model. This code will allow you to choose 1 or more files from your file system, upload them, and run them through the model, giving an indication of whether the object is a horse or a human.**Important Note:** Due to some compatibility issues, the following code block will result in an error after you select the images(s) to upload if you are running this notebook as a `Colab` on the `Safari` browser. For all other browsers, continue with the next code block and ignore the next one after it._For Safari users: please comment out or skip the code block below, uncomment the next code block and run it._ ###Code ## CODE BLOCK FOR NON-SAFARI BROWSERS ## SAFARI USERS: PLEASE SKIP THIS BLOCK AND RUN THE NEXT ONE INSTEAD import numpy as np from google.colab import files from keras.preprocessing import image uploaded = files.upload() for fn in uploaded.keys(): # predicting images path = '/content/' + fn img = image.load_img(path, target_size=(300, 300)) x = image.img_to_array(img) x /= 255 x = np.expand_dims(x, axis=0) images = np.vstack([x]) classes = model.predict(images, batch_size=10) print(classes[0]) if classes[0]>0.5: print(fn + " is a human") else: print(fn + " is a horse") ###Output _____no_output_____ ###Markdown `Safari` users will need to upload the images(s) manually in their workspace. Please follow the instructions, uncomment the code block below and run it.Instructions on how to upload image(s) manually in a Colab:1. Select the `folder` icon on the left `menu bar`.2. Click on the `folder with an arrow pointing upwards` named `..`3. Click on the `folder` named `tmp`.4. Inside of the `tmp` folder, `create a new folder` called `images`. You'll see the `New folder` option by clicking the `3 vertical dots` menu button next to the `tmp` folder.5. Inside of the new `images` folder, upload an image(s) of your choice, preferably of either a horse or a human. Drag and drop the images(s) on top of the `images` folder.6. 
Uncomment and run the code block below. ###Code # # CODE BLOCK FOR SAFARI USERS # import numpy as np # from keras.preprocessing import image # import os # images = os.listdir("/tmp/images") # print(images) # for i in images: # print() # # predicting images # path = '/tmp/images/' + i # img = image.load_img(path, target_size=(300, 300)) # x = image.img_to_array(img) # x /= 255 # x = np.expand_dims(x, axis=0) # images = np.vstack([x]) # classes = model.predict(images, batch_size=10) # print(classes[0]) # if classes[0]>0.5: # print(i + " is a human") # else: # print(i + " is a horse") ###Output _____no_output_____ ###Markdown Visualizing Intermediate RepresentationsTo get a feel for what kind of features your CNN has learned, one fun thing to do is to visualize how an input gets transformed as it goes through the model.You can pick a random image from the training set, and then generate a figure where each row is the output of a layer, and each image in the row is a specific filter in that output feature map. Rerun this cell to generate intermediate representations for a variety of training images. ###Code import numpy as np import random from tensorflow.keras.preprocessing.image import img_to_array, load_img # Define a new Model that will take an image as input, and will output # intermediate representations for all layers in the previous model after # the first. successive_outputs = [layer.output for layer in model.layers[1:]] visualization_model = tf.keras.models.Model(inputs = model.input, outputs = successive_outputs) # Prepare a random input image from the training set. horse_img_files = [os.path.join(train_horse_dir, f) for f in train_horse_names] human_img_files = [os.path.join(train_human_dir, f) for f in train_human_names] img_path = random.choice(horse_img_files + human_img_files) img = load_img(img_path, target_size=(300, 300)) # this is a PIL image x = img_to_array(img) # Numpy array with shape (300, 300, 3) x = x.reshape((1,) + x.shape) # Numpy array with shape (1, 300, 300, 3) # Scale by 1/255 x /= 255 # Run the image through the network, thus obtaining all # intermediate representations for this image. successive_feature_maps = visualization_model.predict(x) # These are the names of the layers, so you can have them as part of the plot layer_names = [layer.name for layer in model.layers[1:]] # Display the representations for layer_name, feature_map in zip(layer_names, successive_feature_maps): if len(feature_map.shape) == 4: # Just do this for the conv / maxpool layers, not the fully-connected layers n_features = feature_map.shape[-1] # number of features in feature map # The feature map has shape (1, size, size, n_features) size = feature_map.shape[1] # Tile the images in this matrix display_grid = np.zeros((size, size * n_features)) for i in range(n_features): x = feature_map[0, :, :, i] x -= x.mean() x /= x.std() x *= 64 x += 128 x = np.clip(x, 0, 255).astype('uint8') # Tile each filter into this big horizontal grid display_grid[:, i * size : (i + 1) * size] = x # Display the grid scale = 20. / n_features plt.figure(figsize=(scale * n_features, scale)) plt.title(layer_name) plt.grid(False) plt.imshow(display_grid, aspect='auto', cmap='viridis') ###Output _____no_output_____ ###Markdown You can see above how the pixels highlighted turn to increasingly abstract and compact representations, especially at the bottom grid. 
The representations downstream start highlighting what the network pays attention to, and they show fewer and fewer features being "activated"; most are set to zero. This is called _representation sparsity_ and is a key feature of deep learning. These representations carry increasingly less information about the original pixels of the image, but increasingly refined information about the class of the image. You can think of a convnet (or a deep network in general) as an information distillation pipeline wherein each layer filters out the most useful features. Clean UpYou will continue with a similar exercise in the next lab but before that, run the following cell to terminate the kernel and free memory resources: ###Code import os, signal os.kill(os.getpid(), signal.SIGKILL) ###Output _____no_output_____ ###Markdown Ungraded Lab: Training with ImageDataGeneratorIn this lab, you will build a train a model on the [Horses or Humans](https://www.tensorflow.org/datasets/catalog/horses_or_humans) dataset. This contains over a thousand images of horses and humans with varying poses and filesizes. You will use the [ImageDataGenerator](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator) class to prepare this dataset so it can be fed to a convolutional neural network.**IMPORTANT NOTE:** This notebook is designed to run as a Colab. Running it on your local machine might result in some of the code blocks throwing errors. Run the code below to download the compressed dataset `horse-or-human.zip`. ###Code !gdown --id 1onaG42NZft3wCE1WH0GDEbUhu75fedP5 ###Output _____no_output_____ ###Markdown *Troubleshooting: If you get a download error saying "Cannot retrieve the public link of the file.", please run the next two cells below to download the dataset. Otherwise, please skip them.* ###Code %%writefile download.sh #!/bin/bash fileid="1onaG42NZft3wCE1WH0GDEbUhu75fedP5" filename="horse-or-human.zip" html=`curl -c ./cookie -s -L "https://drive.google.com/uc?export=download&id=${fileid}"` curl -Lb ./cookie "https://drive.google.com/uc?export=download&`echo ${html}|grep -Po '(confirm=[a-zA-Z0-9\-_]+)'`&id=${fileid}" -o ${filename} # NOTE: Please only run this if downloading with gdown did not work. # This will run the script created above. !bash download.sh ###Output _____no_output_____ ###Markdown You can then unzip the archive using the [zipfile](https://docs.python.org/3/library/zipfile.html) module. ###Code import zipfile # Unzip the dataset local_zip = './horse-or-human.zip' zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('./horse-or-human') zip_ref.close() ###Output _____no_output_____ ###Markdown The contents of the .zip are extracted to the base directory `./horse-or-human`, which in turn each contain `horses` and `humans` subdirectories.In short: The training set is the data that is used to tell the neural network model that 'this is what a horse looks like' and 'this is what a human looks like'.One thing to pay attention to in this sample: We do not explicitly label the images as horses or humans. You will use the ImageDataGenerator API instead -- and this is coded to automatically label images according to the directory names and structure. So, for example, you will have a 'training' directory containing a 'horses' directory and a 'humans' one. `ImageDataGenerator` will label the images appropriately for you, reducing a coding step. 
You can now define each of these directories: ###Code import os # Directory with our training horse pictures train_horse_dir = os.path.join('./horse-or-human/horses') # Directory with our training human pictures train_human_dir = os.path.join('./horse-or-human/humans') ###Output _____no_output_____ ###Markdown Now see what the filenames look like in the `horses` and `humans` training directories: ###Code train_horse_names = os.listdir(train_horse_dir) print(train_horse_names[:10]) train_human_names = os.listdir(train_human_dir) print(train_human_names[:10]) ###Output _____no_output_____ ###Markdown You can also find out the total number of horse and human images in the directories: ###Code print('total training horse images:', len(os.listdir(train_horse_dir))) print('total training human images:', len(os.listdir(train_human_dir))) ###Output _____no_output_____ ###Markdown Now take a look at a few pictures to get a better sense of what they look like. First, configure the `matplotlib` parameters: ###Code %matplotlib inline import matplotlib.pyplot as plt import matplotlib.image as mpimg # Parameters for our graph; we'll output images in a 4x4 configuration nrows = 4 ncols = 4 # Index for iterating over images pic_index = 0 ###Output _____no_output_____ ###Markdown Now, display a batch of 8 horse and 8 human pictures. You can rerun the cell to see a fresh batch each time: ###Code # Set up matplotlib fig, and size it to fit 4x4 pics fig = plt.gcf() fig.set_size_inches(ncols * 4, nrows * 4) pic_index += 8 next_horse_pix = [os.path.join(train_horse_dir, fname) for fname in train_horse_names[pic_index-8:pic_index]] next_human_pix = [os.path.join(train_human_dir, fname) for fname in train_human_names[pic_index-8:pic_index]] for i, img_path in enumerate(next_horse_pix+next_human_pix): # Set up subplot; subplot indices start at 1 sp = plt.subplot(nrows, ncols, i + 1) sp.axis('Off') # Don't show axes (or gridlines) img = mpimg.imread(img_path) plt.imshow(img) plt.show() ###Output _____no_output_____ ###Markdown Building a Small Model from ScratchNow you can define the model architecture that you will train.Step 1 will be to import tensorflow. ###Code import tensorflow as tf ###Output _____no_output_____ ###Markdown You then add convolutional layers as in the previous example, and flatten the final result to feed into the densely connected layers. Note that because this is a two-class classification problem, i.e. a *binary classification problem*, you will end your network with a [*sigmoid* activation](https://wikipedia.org/wiki/Sigmoid_function). This makes the output value of your network a single scalar between 0 and 1, encoding the probability that the current image is class 1 (as opposed to class 0). 
###Code model = tf.keras.models.Sequential([ # Note the input shape is the desired size of the image 300x300 with 3 bytes color # This is the first convolution tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)), tf.keras.layers.MaxPooling2D(2, 2), # The second convolution tf.keras.layers.Conv2D(32, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The third convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The fourth convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # The fifth convolution tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # Flatten the results to feed into a DNN tf.keras.layers.Flatten(), # 512 neuron hidden layer tf.keras.layers.Dense(512, activation='relu'), # Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans') tf.keras.layers.Dense(1, activation='sigmoid') ]) ###Output _____no_output_____ ###Markdown You can review the network architecture and the output shapes with `model.summary()`. ###Code model.summary() ###Output _____no_output_____ ###Markdown The "output shape" column shows how the size of your feature map evolves in each successive layer. As you saw in an earlier lesson, the convolution layers removes the outermost pixels of the image, and each pooling layer halves the dimensions. Next, you'll configure the specifications for model training. You will train the model with the [`binary_crossentropy`](https://www.tensorflow.org/api_docs/python/tf/keras/losses/BinaryCrossentropy) loss because it's a binary classification problem, and the final activation is a sigmoid. (For a refresher on loss metrics, see this [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/descending-into-ml/video-lecture).) You will use the `rmsprop` optimizer with a learning rate of `0.001`. During training, you will want to monitor classification accuracy.**NOTE**: In this case, using the [RMSprop optimization algorithm](https://wikipedia.org/wiki/Stochastic_gradient_descentRMSProp) is preferable to [stochastic gradient descent](https://developers.google.com/machine-learning/glossary/SGD) (SGD), because RMSprop automates learning-rate tuning for us. (Other optimizers, such as [Adam](https://wikipedia.org/wiki/Stochastic_gradient_descentAdam) and [Adagrad](https://developers.google.com/machine-learning/glossary/AdaGrad), also automatically adapt the learning rate during training, and would work equally well here.) ###Code from tensorflow.keras.optimizers import RMSprop model.compile(loss='binary_crossentropy', optimizer=RMSprop(learning_rate=0.001), metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Data PreprocessingNext step is to set up the data generators that will read pictures in the source folders, convert them to `float32` tensors, and feed them (with their labels) to the model. You'll have one generator for the training images and one for the validation images. These generators will yield batches of images of size 300x300 and their labels (binary).As you may already know, data that goes into neural networks should usually be normalized in some way to make it more amenable to processing by the network (i.e. It is uncommon to feed raw pixels into a ConvNet.) 
In this case, you will preprocess the images by normalizing the pixel values to be in the `[0, 1]` range (originally all values are in the `[0, 255]` range).In Keras, this can be done via the `keras.preprocessing.image.ImageDataGenerator` class using the `rescale` parameter. This `ImageDataGenerator` class allows you to instantiate generators of augmented image batches (and their labels) via `.flow(data, labels)` or `.flow_from_directory(directory)`. ###Code from tensorflow.keras.preprocessing.image import ImageDataGenerator # All images will be rescaled by 1./255 train_datagen = ImageDataGenerator(rescale=1/255) # Flow training images in batches of 128 using train_datagen generator train_generator = train_datagen.flow_from_directory( './horse-or-human/', # This is the source directory for training images target_size=(300, 300), # All images will be resized to 300x300 batch_size=128, # Since we use binary_crossentropy loss, we need binary labels class_mode='binary') ###Output _____no_output_____ ###Markdown TrainingYou can start training for 15 epochs -- this may take a few minutes to run.Do note the values per epoch.The `loss` and `accuracy` are great indicators of progress in training. `loss` measures the current model prediction against the known labels, calculating the result. `accuracy`, on the other hand, is the portion of correct guesses. ###Code history = model.fit( train_generator, steps_per_epoch=8, epochs=15, verbose=1) ###Output _____no_output_____ ###Markdown Model PredictionNow take a look at actually running a prediction using the model. This code will allow you to choose 1 or more files from your file system, upload them, and run them through the model, giving an indication of whether the object is a horse or a human.**Important Note:** Due to some compatibility issues, the following code block will result in an error after you select the images(s) to upload if you are running this notebook as a `Colab` on the `Safari` browser. For all other browsers, continue with the next code block and ignore the next one after it._For Safari users: please comment out or skip the code block below, uncomment the next code block and run it._ ###Code ## CODE BLOCK FOR NON-SAFARI BROWSERS ## SAFARI USERS: PLEASE SKIP THIS BLOCK AND RUN THE NEXT ONE INSTEAD import numpy as np from google.colab import files from keras.preprocessing import image uploaded = files.upload() for fn in uploaded.keys(): # predicting images path = '/content/' + fn img = image.load_img(path, target_size=(300, 300)) x = image.img_to_array(img) x /= 255 x = np.expand_dims(x, axis=0) images = np.vstack([x]) classes = model.predict(images, batch_size=10) print(classes[0]) if classes[0]>0.5: print(fn + " is a human") else: print(fn + " is a horse") ###Output _____no_output_____ ###Markdown `Safari` users will need to upload the images(s) manually in their workspace. Please follow the instructions, uncomment the code block below and run it.Instructions on how to upload image(s) manually in a Colab:1. Select the `folder` icon on the left `menu bar`.2. Click on the `folder with an arrow pointing upwards` named `..`3. Click on the `folder` named `tmp`.4. Inside of the `tmp` folder, `create a new folder` called `images`. You'll see the `New folder` option by clicking the `3 vertical dots` menu button next to the `tmp` folder.5. Inside of the new `images` folder, upload an image(s) of your choice, preferably of either a horse or a human. Drag and drop the images(s) on top of the `images` folder.6. 
Uncomment and run the code block below. ###Code # # CODE BLOCK FOR SAFARI USERS # import numpy as np # from keras.preprocessing import image # import os # images = os.listdir("/tmp/images") # print(images) # for i in images: # print() # # predicting images # path = '/tmp/images/' + i # img = image.load_img(path, target_size=(300, 300)) # x = image.img_to_array(img) # x /= 255 # x = np.expand_dims(x, axis=0) # images = np.vstack([x]) # classes = model.predict(images, batch_size=10) # print(classes[0]) # if classes[0]>0.5: # print(i + " is a human") # else: # print(i + " is a horse") ###Output _____no_output_____ ###Markdown Visualizing Intermediate RepresentationsTo get a feel for what kind of features your CNN has learned, one fun thing to do is to visualize how an input gets transformed as it goes through the model.You can pick a random image from the training set, and then generate a figure where each row is the output of a layer, and each image in the row is a specific filter in that output feature map. Rerun this cell to generate intermediate representations for a variety of training images. ###Code import numpy as np import random from tensorflow.keras.preprocessing.image import img_to_array, load_img # Define a new Model that will take an image as input, and will output # intermediate representations for all layers in the previous model after # the first. successive_outputs = [layer.output for layer in model.layers[1:]] visualization_model = tf.keras.models.Model(inputs = model.input, outputs = successive_outputs) # Prepare a random input image from the training set. horse_img_files = [os.path.join(train_horse_dir, f) for f in train_horse_names] human_img_files = [os.path.join(train_human_dir, f) for f in train_human_names] img_path = random.choice(horse_img_files + human_img_files) img = load_img(img_path, target_size=(300, 300)) # this is a PIL image x = img_to_array(img) # Numpy array with shape (300, 300, 3) x = x.reshape((1,) + x.shape) # Numpy array with shape (1, 300, 300, 3) # Scale by 1/255 x /= 255 # Run the image through the network, thus obtaining all # intermediate representations for this image. successive_feature_maps = visualization_model.predict(x) # These are the names of the layers, so you can have them as part of the plot layer_names = [layer.name for layer in model.layers[1:]] # Display the representations for layer_name, feature_map in zip(layer_names, successive_feature_maps): if len(feature_map.shape) == 4: # Just do this for the conv / maxpool layers, not the fully-connected layers n_features = feature_map.shape[-1] # number of features in feature map # The feature map has shape (1, size, size, n_features) size = feature_map.shape[1] # Tile the images in this matrix display_grid = np.zeros((size, size * n_features)) for i in range(n_features): x = feature_map[0, :, :, i] x -= x.mean() x /= x.std() x *= 64 x += 128 x = np.clip(x, 0, 255).astype('uint8') # Tile each filter into this big horizontal grid display_grid[:, i * size : (i + 1) * size] = x # Display the grid scale = 20. / n_features plt.figure(figsize=(scale * n_features, scale)) plt.title(layer_name) plt.grid(False) plt.imshow(display_grid, aspect='auto', cmap='viridis') ###Output _____no_output_____ ###Markdown You can see above how the pixels highlighted turn to increasingly abstract and compact representations, especially at the bottom grid. 
The representations downstream start highlighting what the network pays attention to, and they show fewer and fewer features being "activated"; most are set to zero. This is called _representation sparsity_ and is a key feature of deep learning. These representations carry increasingly less information about the original pixels of the image, but increasingly refined information about the class of the image. You can think of a convnet (or a deep network in general) as an information distillation pipeline wherein each layer filters out the most useful features. Clean UpYou will continue with a similar exercise in the next lab but before that, run the following cell to terminate the kernel and free memory resources: ###Code import os, signal os.kill(os.getpid(), signal.SIGKILL) ###Output _____no_output_____
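###Markdown As an optional aside, the sparsity described above can be checked numerically from the feature maps computed in the visualization cell. The following is a minimal sketch, assuming `layer_names` and `successive_feature_maps` are still defined, so it must be run before the Clean Up cell above terminates the kernel. ###Code import numpy as np

# Fraction of (near-)zero activations per conv/maxpool layer: a rough measure
# of the representation sparsity described above.
for layer_name, feature_map in zip(layer_names, successive_feature_maps):
    if len(feature_map.shape) == 4:
        zero_fraction = np.mean(feature_map <= 1e-6)
        print(f"{layer_name}: {zero_fraction:.1%} of activations are (near) zero")
###Output _____no_output_____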
examples/tutorial/jupyter/execution/pandas_on_dask/local/exercise_2.ipynb
###Markdown ![LOGO](../../../img/MODIN_ver2_hrz.png)Scale your pandas workflows by changing one line of code Exercise 2: Speed improvements**GOAL**: Learn about common functionality that Modin speeds up by using all of your machine's cores. Concept for Exercise: `read_csv` speedupsThe most commonly used data ingestion method used in pandas is CSV files (link to pandas survey). This concept is designed to give an idea of the kinds of speedups possible, even on a non-distributed filesystem. Modin also supports other file formats for parallel and distributed reads, which can be found in the documentation. We will import both Modin and pandas so that the speedups are evident.**Note: Rerunning the `read_csv` cells many times may result in degraded performance, depending on the memory of the machine** ###Code import modin.pandas as pd import pandas import time from IPython.display import Markdown, display def printmd(string): display(Markdown(string)) ###Output _____no_output_____ ###Markdown Dataset: 2015 NYC taxi trip dataWe will be using a version of this data already in S3, originally posted in this blog post: https://matthewrocklin.com/blog/work/2017/01/12/dask-dataframes**Size: ~1.8GB** ###Code path = "s3://dask-data/nyc-taxi/2015/yellow_tripdata_2015-01.csv" ###Output _____no_output_____ ###Markdown Modin execution engine setting: ###Code import modin.config as cfg cfg.Engine.put("dask") ###Output _____no_output_____ ###Markdown `pandas.read_csv` ###Code start = time.time() pandas_df = pandas.read_csv(path, parse_dates=["tpep_pickup_datetime", "tpep_dropoff_datetime"], quoting=3) end = time.time() pandas_duration = end - start print("Time to read with pandas: {} seconds".format(round(pandas_duration, 3))) ###Output _____no_output_____ ###Markdown Expect pandas to take >3 minutes on EC2, longer locallyThis is a good time to chat with your neighborDicussion topics- Do you work with a large amount of data daily?- How big is your data?- What’s the common use case of your data?- Do you use any big data analytics tools?- Do you use any interactive analytics tool?- What’s are some drawbacks of your current interative analytic tools today? `modin.pandas.read_csv` ###Code start = time.time() modin_df = pd.read_csv(path, parse_dates=["tpep_pickup_datetime", "tpep_dropoff_datetime"], quoting=3) end = time.time() modin_duration = end - start print("Time to read with Modin: {} seconds".format(round(modin_duration, 3))) printmd("### Modin is {}x faster than pandas at `read_csv`!".format(round(pandas_duration / modin_duration, 2))) ###Output _____no_output_____ ###Markdown Are they equal? ###Code pandas_df modin_df ###Output _____no_output_____ ###Markdown Concept for exercise: ReducesIn pandas, a reduce would be something along the lines of a `sum` or `count`. It computes some summary statistics about the rows or columns. We will be using `count`. ###Code start = time.time() pandas_count = pandas_df.count() end = time.time() pandas_duration = end - start print("Time to count with pandas: {} seconds".format(round(pandas_duration, 3))) start = time.time() modin_count = modin_df.count() end = time.time() modin_duration = end - start print("Time to count with Modin: {} seconds".format(round(modin_duration, 3))) printmd("### Modin is {}x faster than pandas at `count`!".format(round(pandas_duration / modin_duration, 2))) ###Output _____no_output_____ ###Markdown Are they equal? 
###Code pandas_count modin_count ###Output _____no_output_____ ###Markdown Concept for exercise: Map operationsIn pandas, map operations are operations that do a single pass over the data and do not change its shape. Operations like `isnull` and `applymap` are included in this. We will be using `isnull`. ###Code start = time.time() pandas_isnull = pandas_df.isnull() end = time.time() pandas_duration = end - start print("Time to isnull with pandas: {} seconds".format(round(pandas_duration, 3))) start = time.time() modin_isnull = modin_df.isnull() end = time.time() modin_duration = end - start print("Time to isnull with Modin: {} seconds".format(round(modin_duration, 3))) printmd("### Modin is {}x faster than pandas at `isnull`!".format(round(pandas_duration / modin_duration, 2))) ###Output _____no_output_____ ###Markdown Are they equal? ###Code pandas_isnull modin_isnull ###Output _____no_output_____ ###Markdown Concept for exercise: Apply over a single columnSometimes we want to compute some summary statistics on a single column from our dataset. ###Code start = time.time() rounded_trip_distance_pandas = pandas_df["trip_distance"].apply(round) end = time.time() pandas_duration = end - start print("Time to groupby with pandas: {} seconds".format(round(pandas_duration, 3))) start = time.time() rounded_trip_distance_modin = modin_df["trip_distance"].apply(round) end = time.time() modin_duration = end - start print("Time to add a column with Modin: {} seconds".format(round(modin_duration, 3))) printmd("### Modin is {}x faster than pandas at `apply` on one column!".format(round(pandas_duration / modin_duration, 2))) ###Output _____no_output_____ ###Markdown Are they equal? ###Code rounded_trip_distance_pandas rounded_trip_distance_modin ###Output _____no_output_____ ###Markdown Concept for exercise: Add a columnIt is common to need to add a new column to an existing dataframe, here we show that this is significantly faster in Modin due to metadata management and an efficient zero copy implementation. ###Code start = time.time() pandas_df["rounded_trip_distance"] = rounded_trip_distance_pandas end = time.time() pandas_duration = end - start print("Time to groupby with pandas: {} seconds".format(round(pandas_duration, 3))) start = time.time() modin_df["rounded_trip_distance"] = rounded_trip_distance_modin end = time.time() modin_duration = end - start print("Time to add a column with Modin: {} seconds".format(round(modin_duration, 3))) printmd("### Modin is {}x faster than pandas add a column!".format(round(pandas_duration / modin_duration, 2))) ###Output _____no_output_____ ###Markdown Are they equal? ###Code pandas_df modin_df ###Output _____no_output_____ ###Markdown ![LOGO](../../../img/MODIN_ver2_hrz.png)Scale your pandas workflows by changing one line of code Exercise 2: Speed improvements**GOAL**: Learn about common functionality that Modin speeds up by using all of your machine's cores. Concept for Exercise: `read_csv` speedupsThe most commonly used data ingestion method used in pandas is CSV files (link to pandas survey). This concept is designed to give an idea of the kinds of speedups possible, even on a non-distributed filesystem. Modin also supports other file formats for parallel and distributed reads, which can be found in the documentation. 
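For instance, the Parquet and JSON readers are called exactly like their pandas counterparts; the sketch below uses placeholder paths (no such files ship with this tutorial), and how fully each reader is parallelized depends on the Modin version. ###Code import modin.pandas as pd

# Placeholder paths: substitute files you actually have before running.
# df_parquet = pd.read_parquet("s3://my-bucket/trips.parquet")
# df_json = pd.read_json("s3://my-bucket/trips.json", lines=True)
###Output _____no_output_____
###Markdown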
We will import both Modin and pandas so that the speedups are evident.**Note: Rerunning the `read_csv` cells many times may result in degraded performance, depending on the memory of the machine** ###Code import modin.pandas as pd import pandas import time from IPython.display import Markdown, display def printmd(string): display(Markdown(string)) ###Output _____no_output_____ ###Markdown Dataset: 2015 NYC taxi trip dataLink to raw dataset: https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.pageWe will be using a version of this data already in S3, originally posted in this blog post: https://matthewrocklin.com/blog/work/2017/01/12/dask-dataframes**Size: ~2GB** ###Code path = "s3://dask-data/nyc-taxi/2015/yellow_tripdata_2015-01.csv" ###Output _____no_output_____ ###Markdown Modin execution engine setting: ###Code import modin.config as cfg cfg.Engine.put("dask") ###Output _____no_output_____ ###Markdown `pandas.read_csv` ###Code start = time.time() pandas_df = pandas.read_csv(path, parse_dates=["tpep_pickup_datetime", "tpep_dropoff_datetime"], quoting=3) end = time.time() pandas_duration = end - start print("Time to read with pandas: {} seconds".format(round(pandas_duration, 3))) ###Output _____no_output_____ ###Markdown Expect pandas to take >3 minutes on EC2, longer locallyThis is a good time to chat with your neighborDicussion topics- Do you work with a large amount of data daily?- How big is your data?- What’s the common use case of your data?- Do you use any big data analytics tools?- Do you use any interactive analytics tool?- What’s are some drawbacks of your current interative analytic tools today? `modin.pandas.read_csv` ###Code start = time.time() modin_df = pd.read_csv(path, parse_dates=["tpep_pickup_datetime", "tpep_dropoff_datetime"], quoting=3) end = time.time() modin_duration = end - start print("Time to read with Modin: {} seconds".format(round(modin_duration, 3))) printmd("### Modin is {}x faster than pandas at `read_csv`!".format(round(pandas_duration / modin_duration, 2))) ###Output _____no_output_____ ###Markdown Are they equal? ###Code pandas_df modin_df ###Output _____no_output_____ ###Markdown Concept for exercise: ReducesIn pandas, a reduce would be something along the lines of a `sum` or `count`. It computes some summary statistics about the rows or columns. We will be using `count`. ###Code start = time.time() pandas_count = pandas_df.count() end = time.time() pandas_duration = end - start print("Time to count with pandas: {} seconds".format(round(pandas_duration, 3))) start = time.time() modin_count = modin_df.count() end = time.time() modin_duration = end - start print("Time to count with Modin: {} seconds".format(round(modin_duration, 3))) printmd("### Modin is {}x faster than pandas at `count`!".format(round(pandas_duration / modin_duration, 2))) ###Output _____no_output_____ ###Markdown Are they equal? ###Code pandas_count modin_count ###Output _____no_output_____ ###Markdown Concept for exercise: Map operationsIn pandas, map operations are operations that do a single pass over the data and do not change its shape. Operations like `isnull` and `applymap` are included in this. We will be using `isnull`. 
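`applymap` is not timed in this exercise, but purely as an illustration (restricted to a single column to keep it quick), it follows the same one-pass pattern. ###Code # Illustration only: applymap is also a map operation, doing one pass over every
# element and leaving the shape unchanged.
formatted_pandas = pandas_df[["trip_distance"]].applymap("{:.2f}".format)
formatted_modin = modin_df[["trip_distance"]].applymap("{:.2f}".format)
print(formatted_pandas.head())
print(formatted_modin.head())
###Output _____no_output_____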
###Code start = time.time()
pandas_isnull = pandas_df.isnull()
end = time.time()
pandas_duration = end - start
print("Time to isnull with pandas: {} seconds".format(round(pandas_duration, 3)))
start = time.time()
modin_isnull = modin_df.isnull()
end = time.time()
modin_duration = end - start
print("Time to isnull with Modin: {} seconds".format(round(modin_duration, 3)))
printmd("### Modin is {}x faster than pandas at `isnull`!".format(round(pandas_duration / modin_duration, 2)))
###Output _____no_output_____
###Markdown Are they equal? ###Code pandas_isnull modin_isnull ###Output _____no_output_____ ###Markdown Concept for exercise: Apply over a single columnSometimes we want to compute some summary statistics on a single column from our dataset. ###Code start = time.time()
rounded_trip_distance_pandas = pandas_df["trip_distance"].apply(round)
end = time.time()
pandas_duration = end - start
print("Time to apply with pandas: {} seconds".format(round(pandas_duration, 3)))
start = time.time()
rounded_trip_distance_modin = modin_df["trip_distance"].apply(round)
end = time.time()
modin_duration = end - start
print("Time to apply with Modin: {} seconds".format(round(modin_duration, 3)))
printmd("### Modin is {}x faster than pandas at `apply` on one column!".format(round(pandas_duration / modin_duration, 2)))
###Output _____no_output_____
###Markdown Are they equal? ###Code rounded_trip_distance_pandas rounded_trip_distance_modin ###Output _____no_output_____ ###Markdown Concept for exercise: Add a columnIt is common to need to add a new column to an existing dataframe; here we show that this is significantly faster in Modin due to metadata management and an efficient zero-copy implementation. ###Code start = time.time()
pandas_df["rounded_trip_distance"] = rounded_trip_distance_pandas
end = time.time()
pandas_duration = end - start
print("Time to add a column with pandas: {} seconds".format(round(pandas_duration, 3)))
start = time.time()
modin_df["rounded_trip_distance"] = rounded_trip_distance_modin
end = time.time()
modin_duration = end - start
print("Time to add a column with Modin: {} seconds".format(round(modin_duration, 3)))
printmd("### Modin is {}x faster than pandas at adding a column!".format(round(pandas_duration / modin_duration, 2)))
###Output _____no_output_____
###Markdown Are they equal? ###Code pandas_df modin_df ###Output _____no_output_____
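###Markdown The cells above repeat the same timing boilerplate for every operation. As an optional aside, the pattern can be wrapped in a small helper; the function name `compare` is ours, not part of the tutorial, and the sketch assumes `pandas_df` and `modin_df` from the cells above are still in memory. ###Code import time

def compare(label, pandas_fn, modin_fn):
    """Time the same operation on the pandas and Modin dataframes and print the speedup."""
    start = time.time()
    pandas_fn()
    pandas_duration = time.time() - start

    start = time.time()
    modin_fn()
    modin_duration = time.time() - start

    print(f"{label}: pandas {round(pandas_duration, 3)}s, Modin {round(modin_duration, 3)}s, "
          f"speedup {round(pandas_duration / modin_duration, 2)}x")

# Example usage with the dataframes created above:
compare("count", lambda: pandas_df.count(), lambda: modin_df.count())
compare("isnull", lambda: pandas_df.isnull(), lambda: modin_df.isnull())
###Output _____no_output_____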
Drawing rectangle and semicircle.ipynb
###Markdown Drawing rectangle and semicircle Preparations ###Code # For convenience, let's begin by enabling
# automatic reloading of modules when they change.
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.pyplot as plt

x_axis = np.array([0,0,100,100, 0])
y_axis = np.array([0,100,100,0, 0])
x_coordinates = np.concatenate([x_axis])
y_coordinates = np.concatenate([y_axis])
plt.plot(x_coordinates, y_coordinates)
import matplotlib.pyplot as plt
import numpy as np

def generate_semicircle(center_x, center_y, radius, stepsize=0.1):
    """
    generates coordinates for a semicircle (the right half of a circle),
    centered at center_x, center_y
    """
    x = np.arange(center_x, center_x+radius+stepsize, stepsize)
    # use the offset from center_x so that a non-zero centre is handled correctly
    y = np.sqrt(radius**2 - (x - center_x)**2)

    # since each x value has two corresponding y-values, duplicate x-axis.
    # [::-1] is required to have the correct order of elements for plt.plot.
    x = np.concatenate([x,x[::-1]])

    # concatenate y and flipped y.
    y = np.concatenate([y,-y[::-1]])

    return x, y + center_y

x,y = generate_semicircle(0,50,10, 0.01)
plt.plot(x, y)
plt.show()
###Output _____no_output_____
###Markdown An example ###Code import matplotlib.pyplot as plt

fig, ax = plt.subplots(1, 1)
ax.set_aspect('equal')

x_axis = np.array([0,0,100,100, 0])
y_axis = np.array([0,100,100,0, 0])
x_coordinates = np.concatenate([x_axis])
y_coordinates = np.concatenate([y_axis])
ax.plot(x_coordinates, y_coordinates)

# ((x - x0) / a) ** 2 + ((y - y0) / b) ** 2 == 1
a = 10
b = 15
x0 = 99
y0 = 99

x = np.linspace(-a + x0, a + x0)
y = b * np.sqrt(1 - ((x - x0) / a) ** 2) + y0
ax.plot(y, x)
###Output _____no_output_____
###Markdown My Trial ###Code ## First we will draw a rectangle or a square,
# then a semicircle: x^2 + y^2 <= r^2 with y > 0 (see the completed sketch below).
###Output _____no_output_____
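###Markdown The trial above stops at the comments. Below is a minimal sketch of one way to finish it: it reuses the 100x100 rectangle from the Preparations cell and draws the upper half of the circle x^2 + y^2 = r^2 (the y >= 0 part), shifted to an assumed centre on the middle of the rectangle's top edge; the radius and centre values are illustrative choices, not taken from the original notebook. ###Code import numpy as np
import matplotlib.pyplot as plt

# Rectangle: the same 100x100 outline used in the Preparations cell.
rect_x = np.array([0, 0, 100, 100, 0])
rect_y = np.array([0, 100, 100, 0, 0])

# Upper semicircle: x^2 + y^2 = r^2 with y >= 0, traced with angles in [0, pi]
# and shifted to an assumed centre (cx, cy) on the rectangle's top edge.
r = 30
cx, cy = 50, 100
theta = np.linspace(0, np.pi, 200)
semi_x = cx + r * np.cos(theta)
semi_y = cy + r * np.sin(theta)

fig, ax = plt.subplots(1, 1)
ax.set_aspect('equal')
ax.plot(rect_x, rect_y)
ax.plot(semi_x, semi_y)
plt.show()
###Output _____no_output_____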
.ipynb_checkpoints/CNN_with_brigs-checkpoint.ipynb
###Markdown 由于dataset Y输入不能为sting 这里把 类名字改成数字, 需要提前把lable name 保存。 ###Code import os, sys,shutil a = os.listdir("/data/estir/") for i in a : try : i = int(i) print(i) shutil.rmtree("/data/estir/"+str(i), ignore_errors=False, onerror=None) except: pass ###Output 98 138 68 24 172 69 23 121 179 67 28 82 32 171 87 162 144 106 94 0 143 14 99 132 42 65 35 29 44 15 103 156 145 80 64 131 72 112 63 152 128 10 71 169 1 108 36 91 124 130 105 70 114 25 4 79 52 160 129 111 123 81 137 41 167 17 153 95 16 18 161 56 39 155 76 122 134 148 6 151 46 83 20 37 51 154 9 27 110 180 77 175 13 113 140 50 45 182 86 78 170 177 57 34 31 43 184 40 75 181 118 173 88 97 126 48 12 58 5 119 3 176 59 125 159 115 142 53 135 158 117 74 101 164 150 11 7 165 96 61 107 139 2 141 8 90 116 85 100 54 102 120 66 136 21 26 19 22 33 183 168 174 133 49 89 93 127 62 157 104 55 60 146 38 109 84 30 47 166 178 163 147 149 92 73 ###Markdown 创建预测序列 ###Code pre_list = [ str(i) for i in list(pathlib.Path("/data/birds/valid/").glob("*/*"))] predict_data = list() for a in pre_list[::10]: print(a) a = tf.io.read_file(a).numpy() d =tf.image.resize( tf.image.decode_jpeg(a), [150, 150]) / 255 d = d.numpy() plt.imshow(d) d = d.reshape(1,150,150,3).tolist() predict_data.append(d) ###Output /data/birds/valid/98/1.jpg /data/birds/valid/68/1.jpg /data/birds/valid/172/1.jpg /data/birds/valid/23/1.jpg /data/birds/valid/179/1.jpg /data/birds/valid/28/1.jpg /data/birds/valid/32/1.jpg /data/birds/valid/87/1.jpg /data/birds/valid/144/1.jpg /data/birds/valid/94/1.jpg /data/birds/valid/143/1.jpg /data/birds/valid/99/1.jpg /data/birds/valid/42/1.jpg /data/birds/valid/35/1.jpg /data/birds/valid/44/1.jpg /data/birds/valid/103/1.jpg /data/birds/valid/145/1.jpg /data/birds/valid/64/1.jpg /data/birds/valid/72/1.jpg /data/birds/valid/63/1.jpg /data/birds/valid/128/1.jpg /data/birds/valid/71/1.jpg /data/birds/valid/1/1.jpg /data/birds/valid/36/1.jpg /data/birds/valid/124/1.jpg /data/birds/valid/105/1.jpg /data/birds/valid/114/1.jpg /data/birds/valid/4/1.jpg /data/birds/valid/52/1.jpg /data/birds/valid/129/1.jpg /data/birds/valid/123/1.jpg /data/birds/valid/137/1.jpg /data/birds/valid/167/1.jpg /data/birds/valid/153/1.jpg /data/birds/valid/16/1.jpg /data/birds/valid/161/1.jpg /data/birds/valid/39/1.jpg /data/birds/valid/76/1.jpg /data/birds/valid/134/1.jpg /data/birds/valid/6/1.jpg /data/birds/valid/46/1.jpg /data/birds/valid/20/1.jpg /data/birds/valid/51/1.jpg /data/birds/valid/9/1.jpg /data/birds/valid/110/1.jpg /data/birds/valid/77/1.jpg /data/birds/valid/13/1.jpg /data/birds/valid/140/1.jpg /data/birds/valid/45/1.jpg /data/birds/valid/86/1.jpg /data/birds/valid/170/1.jpg /data/birds/valid/57/1.jpg /data/birds/valid/31/1.jpg /data/birds/valid/184/1.jpg /data/birds/valid/75/1.jpg /data/birds/valid/118/1.jpg /data/birds/valid/88/1.jpg /data/birds/valid/126/1.jpg /data/birds/valid/12/1.jpg /data/birds/valid/5/1.jpg /data/birds/valid/3/1.jpg /data/birds/valid/59/1.jpg /data/birds/valid/159/1.jpg /data/birds/valid/142/1.jpg /data/birds/valid/135/1.jpg /data/birds/valid/117/1.jpg /data/birds/valid/101/1.jpg /data/birds/valid/150/1.jpg /data/birds/valid/7/1.jpg /data/birds/valid/96/1.jpg /data/birds/valid/107/1.jpg /data/birds/valid/2/1.jpg /data/birds/valid/8/1.jpg /data/birds/valid/116/1.jpg /data/birds/valid/100/1.jpg /data/birds/valid/102/1.jpg /data/birds/valid/66/1.jpg /data/birds/valid/21/1.jpg /data/birds/valid/19/1.jpg /data/birds/valid/33/1.jpg /data/birds/valid/168/1.jpg /data/birds/valid/133/1.jpg /data/birds/valid/89/1.jpg /data/birds/valid/127/1.jpg 
/data/birds/valid/157/1.jpg /data/birds/valid/55/1.jpg /data/birds/valid/146/1.jpg /data/birds/valid/109/1.jpg /data/birds/valid/30/1.jpg /data/birds/valid/166/1.jpg /data/birds/valid/163/1.jpg /data/birds/valid/149/1.jpg /data/birds/valid/73/1.jpg ###Markdown format image dataset ###Code def parse_image(filename): parts = tf.strings.split(filename, os.sep) label = parts[-2] #print(tf.strings.as_string(label)) label = tf.strings.to_number(label) print(label) image = tf.io.read_file(filename) image = tf.image.decode_jpeg(image) image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, [150, 150]) return image, label traing_ds = tf.data.Dataset.list_files(str(taring_data_)).map(parse_image).shuffle(buffer_size=2000).batch(batch_size=100).repeat() validation_ds = tf.data.Dataset.list_files(str(valid_data_)).map(parse_image).shuffle(buffer_size=2000).batch(batch_size=100).repeat() test_ds = tf.data.Dataset.list_files(str(test_data_)).map(parse_image).shuffle(buffer_size=2000).batch(batch_size=100).repeat() # for i in traing_ds.take(100): # c = i[0] # print(i[1]) # plt.figure() # plt.imshow(c) # plt.colorbar() # #plt.grid(False) # plt.show() # model = models.Sequential() # model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3))) # model.add(MaxPooling2D((2, 2))) # model.add(Dropout(0.2)) # model.add(SeparableConv2D(64, (3, 3), activation='relu', padding="same")) # model.add(MaxPooling2D((2, 2))) # model.add(SeparableConv2D(128, (3, 3), activation='relu', padding="same")) # model.add(MaxPooling2D((2, 2))) # model.add(Dropout(0.2)) # model.add(Flatten()) # model.add(Dense(64, activation='relu')) # model.add(Dense(10)) model = models.Sequential([ Conv2D(16, 3, padding='same', activation='relu', input_shape=(150, 150 ,3)), MaxPooling2D(), Conv2D(32, 3, padding='same', activation='relu'), Conv2D(32, 3, padding='same', activation='relu'), # GaussianDropout(0.02), MaxPooling2D(), Conv2D(64, 3, padding='same', activation='relu'), Conv2D(64, 3, padding='same', activation='relu'), MaxPooling2D(), Conv2D(128, 3, padding='same', activation='relu'), Conv2D(128, 3, padding='same', activation='relu'), Conv2D(128, 3, padding='same', activation='relu'), MaxPooling2D(), #Dropout(0.002), Flatten(), Dense(512, activation='relu'), #Dense(1) ]) model.summary() model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) history =model.fit(traing_ds,epochs=10, steps_per_epoch=1000, validation_data=validation_ds, validation_steps=5) model.evaluate(test_ds, steps=100) history =model.fit(traing_ds,epochs=10, steps_per_epoch=1000, validation_data=validation_ds, validation_steps=5) import matplotlib.pyplot as plt def plot_curves(history): pd.DataFrame(history.history).plot(figsize=(8,5)) print(pd.DataFrame(history.history)) plt.grid(True) plt.gca().set_ylim(0,3.2) plt.show() plot_curves(history) # acc = history.history['accuracy'] # val_acc = history.history['val_accuracy'] # loss = history.history['loss'] # val_loss = history.history['val_loss'] # epochs_range = range(10) # plt.figure(figsize=(8, 8)) # plt.subplot(1, 2, 1) # plt.plot(epochs_range, acc, label='Training Accuracy') # plt.plot(epochs_range, val_acc, label='Validation Accuracy') # plt.legend(loc='lower right') # plt.title('Training and Validation Accuracy') # plt.subplot(1, 2, 2) # plt.plot(epochs_range, loss, label='Training Loss') # plt.plot(epochs_range, val_loss, label='Validation Loss') # plt.legend(loc='upper right') # plt.title('Training and 
Validation Loss') # plt.show() # acc = new_model.history.history['accuracy'] # val_acc = new_model.history.history['val_accuracy'] # loss = new_model.history.history['loss'] # val_loss = new_model.history.history['val_loss'] # epochs_range = range(10) # plt.figure(figsize=(8, 8)) # plt.subplot(1, 2, 1) # plt.plot(epochs_range, acc, label='Training Accuracy') # plt.plot(epochs_range, val_acc, label='Validation Accuracy') # plt.legend(loc='lower right') # plt.title('Training and Validation Accuracy') # plt.subplot(1, 2, 2) # plt.plot(epochs_range, loss, label='Training Loss') # plt.plot(epochs_range, val_loss, label='Validation Loss') # plt.legend(loc='upper right') # plt.title('Training and Validation Loss') # plt.show() ###Output loss accuracy val_loss val_accuracy 0 5.657499 0.117142 5.641942 0.128 1 5.119687 0.190351 5.779195 0.102 2 5.072755 0.193514 5.696252 0.104 3 5.071297 0.193755 5.905039 0.094 4 5.062468 0.194516 5.764583 0.112 5 5.061470 0.194215 5.854591 0.096 6 5.070397 0.194015 5.788947 0.102 7 5.065960 0.194165 5.836821 0.096 8 5.059340 0.194295 5.916576 0.092 9 5.059459 0.194576 5.830439 0.090 ###Markdown prediction the model ###Code # pre_data = model.predict(pre_dict[0]) #np.argmax(pre_data) #print(pre_dict[1]) #plt.imshow(pre_dict[1]) for i in predict_data: pre_data = model.predict(i) print(np.argmax(pre_data)) ###Output [[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]] 0 [[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 
0. 0. 0.]] 0
[remaining prediction output condensed: every further iteration of the prediction loop prints the same all-zero logit vector, each followed by an argmax of 0]
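###Markdown A note on the all-zero predictions above: the model ends with `Dense(512, activation='relu')` and has no final classification layer (only a commented-out `#Dense(1)`), so the values handed to `SparseCategoricalCrossentropy(from_logits=True)` are 512 ReLU activations rather than one logit per bird class. That is consistent with the stalled validation accuracy and the all-zero prediction vectors printed above. Below is a minimal, hypothetical sketch of a corrected head; `NUM_CLASSES` is an assumption and must be set to the actual number of label folders, with labels lying in `[0, NUM_CLASSES)`. ###Code
# Hypothetical corrected model head (sketch, not from the original notebook).
import tensorflow as tf
from tensorflow.keras import models
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

NUM_CLASSES = 200  # assumed value -- replace with the real number of class folders

# Same idea as the model above, but finishing with one raw logit per class.
fixed_model = models.Sequential([
    Conv2D(16, 3, padding='same', activation='relu', input_shape=(150, 150, 3)),
    MaxPooling2D(),
    Conv2D(32, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(NUM_CLASSES),  # raw logits; matches from_logits=True in the loss
])
fixed_model.compile(optimizer='adam',
                    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                    metrics=['accuracy'])
###Output _____no_output_____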
examples/training-demo-data-owner-2-simple.ipynb
###Markdown Confidential ML Training Demo - Data Owner 2This notebook is the Data Owners part of the *Confidential ML Training Demo* showing how a simple logistic regression classifier can be trained while keeping the training data provably confidential. The demo requires the [Training Client API](https://github.com/decentriq/avato-python-client-training) and its dependencies to be installed. 1 - Import dependencies and submission code ###Code import os import example dataowner2_api_token = os.getenv('DATAOWNER2_API_TOKEN') dataowner2_file = "test-data/wine-dataowner2.csv" ###Output _____no_output_____ ###Markdown 2 - Set instance id received from Analyst ###Code instance_id_from_analyst = "46de08acb531c1df9d7c378921b2de77179a54dedd128641d8335ed50d400e32" ###Output _____no_output_____ ###Markdown 3 - Submit Data ###Code example.data_owner_submit_data( dataowner2_api_token, instance_id_from_analyst, dataowner2_file ) ###Output _____no_output_____
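###Markdown The demo above assumes the [Training Client API](https://github.com/decentriq/avato-python-client-training) and its dependencies are already installed. A possible install step is sketched below; the exact package source is an assumption based on the repository linked above, so adjust it to however the client is actually distributed. ###Code
# Hypothetical install command (assumption: installing straight from the linked repository).
pip install git+https://github.com/decentriq/avato-python-client-training.git
###Output _____no_output_____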
docs/notebooks/auto_examples/MAF/plot_2D_0_Rb2O2p25SiO2.ipynb
###Markdown 2D MAF data of Rb2O.2.25SiO2 glass The following example is an application of the statistical learning method indetermining the distribution of the nuclear shielding tensor parameters from a 2Dmagic-angle flipping (MAF) spectrum. In this example, we use the 2D MAF spectrum[f1]_ of $\text{Rb}_2\text{O}\cdot2.25\text{SiO}_2$ glass. Before getting startedImport all relevant packages. ###Code import csdmpy as cp import matplotlib.pyplot as plt import numpy as np from matplotlib import cm from csdmpy import statistics as stats from mrinversion.kernel.nmr import ShieldingPALineshape from mrinversion.kernel.utils import x_y_to_zeta_eta from mrinversion.linear_model import SmoothLassoCV, TSVDCompression from mrinversion.utils import plot_3d, to_Haeberlen_grid ###Output _____no_output_____ ###Markdown Setup for the matplotlib figures. ###Code # function for plotting 2D dataset def plot2D(csdm_object, **kwargs): plt.figure(figsize=(4.5, 3.5)) ax = plt.subplot(projection="csdm") ax.imshow(csdm_object, cmap="gist_ncar_r", aspect="auto", **kwargs) ax.invert_xaxis() ax.invert_yaxis() plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown Dataset setup Import the datasetLoad the dataset. In this example, we import the dataset as the CSDMdata-object. ###Code # The 2D MAF dataset in csdm format filename = "https://zenodo.org/record/3964531/files/Rb2O-2_25SiO2-MAF.csdf" data_object = cp.load(filename) # For inversion, we only interest ourselves with the real part of the complex dataset. data_object = data_object.real # We will also convert the coordinates of both dimensions from Hz to ppm. _ = [item.to("ppm", "nmr_frequency_ratio") for item in data_object.dimensions] ###Output _____no_output_____ ###Markdown Here, the variable ``data_object`` is a`CSDM `_object that holds the real part of the 2D MAF dataset. The plot of the 2D MAF datasetis ###Code plot2D(data_object) ###Output _____no_output_____ ###Markdown There are two dimensions in this dataset. The dimension at index 0, the horizontaldimension in the figure, is the pure anisotropic dimension, while the dimension atindex 1, the vertical dimension, is the isotropic chemical shift dimension. Thenumber of coordinates along the respective dimensions is ###Code print(data_object.shape) ###Output _____no_output_____ ###Markdown with 128 points along the anisotropic dimension (index 0) and 512 points along theisotropic chemical shift dimension (index 1). Prepping the data for inversion**Step-1: Data Alignment**When using the csdm objects with the ``mrinversion`` package, the dimension at index0 must be the dimension undergoing the linear inversion. In this example, we plan toinvert the pure anisotropic shielding line-shape. Since the anisotropic dimension in``data_object`` is already at index 0, no further action is required.**Step-2: Optimization**Notice, the signal from the 2D MAF dataset occupies a small fraction of thetwo-dimensional frequency grid. Though you may choose to proceed with the inversiondirectly onto this dataset, it is not computationally optimum. For optimumperformance, trim the dataset to the region of relevant signals. Use the appropriatearray indexing/slicing to select the signal region. ###Code data_object_truncated = data_object[:, 250:285] plot2D(data_object_truncated) ###Output _____no_output_____ ###Markdown In the above code, we truncate the isotropic chemical shifts, dimension at index 1,to coordinate between indexes 250 and 285. 
The isotropic shift coordinatesfrom the truncated dataset are ###Code print(data_object_truncated.dimensions[1].coordinates) ###Output _____no_output_____ ###Markdown Linear Inversion setup Dimension setupIn a generic linear-inverse problem, one needs to define two sets of dimensions---thedimensions undergoing a linear transformation, and the dimensions onto which theinversion method transforms the data.In the line-shape inversion, the two sets of dimensions are the anisotropic dimensionand the `x`-`y` dimensions.**Anisotropic-dimension:**The dimension of the dataset that holds the pure anisotropic frequencycontributions. In ``mrinversion``, this must always be the dimension at index 0 ofthe data object. ###Code anisotropic_dimension = data_object_truncated.dimensions[0] ###Output _____no_output_____ ###Markdown **x-y dimensions:**The two inverse dimensions corresponding to the `x` and `y`-axis of the `x`-`y` grid. ###Code inverse_dimensions = [ cp.LinearDimension(count=25, increment="400 Hz", label="x"), # the `x`-dimension. cp.LinearDimension(count=25, increment="400 Hz", label="y"), # the `y`-dimension. ] ###Output _____no_output_____ ###Markdown Generating the kernelFor MAF datasets, the line-shape kernel corresponds to the pure nuclear shieldinganisotropy line-shapes. Use the:class:`~mrinversion.kernel.nmr.ShieldingPALineshape` class to generate ashielding line-shape kernel. ###Code lineshape = ShieldingPALineshape( anisotropic_dimension=anisotropic_dimension, inverse_dimension=inverse_dimensions, channel="29Si", magnetic_flux_density="9.4 T", rotor_angle="90°", rotor_frequency="13 kHz", number_of_sidebands=4, ) ###Output _____no_output_____ ###Markdown Here, ``lineshape`` is an instance of the:class:`~mrinversion.kernel.nmr.ShieldingPALineshape` class. The requiredarguments of this class are the `anisotropic_dimension`, `inverse_dimension`, and`channel`. We have already defined the first two arguments in the previoussub-section. The value of the `channel` argument is the nucleus observed in the MAFexperiment. In this example, this value is '29Si'.The remaining arguments, such as the `magnetic_flux_density`, `rotor_angle`,and `rotor_frequency`, are set to match the conditions under which the 2D MAFspectrum was acquired. Note for the MAF measurements the rotor angle is usually$90^\circ$ for the anisotropic dimension. The value of the`number_of_sidebands` argument is the number of sidebands calculated for eachline-shape within the kernel. Unless, you have a lot of spinning sidebands in yourMAF dataset, the value of this argument is generally low. Here, we calculate fourspinning sidebands per line-shape within in the kernel.Once the ShieldingPALineshape instance is created, use the:meth:`~mrinversion.kernel.nmr.ShieldingPALineshape.kernel` method of theinstance to generate the MAF line-shape kernel. ###Code K = lineshape.kernel(supersampling=1) print(K.shape) ###Output _____no_output_____ ###Markdown The kernel ``K`` is a NumPy array of shape (128, 625), where the axes with 128 and625 points are the anisotropic dimension and the features (x-y coordinates)corresponding to the $25\times 25$ `x`-`y` grid, respectively. Data CompressionData compression is optional but recommended. It may reduce the size of theinverse problem and, thus, further computation time. 
###Code new_system = TSVDCompression(K, data_object_truncated) compressed_K = new_system.compressed_K compressed_s = new_system.compressed_s print(f"truncation_index = {new_system.truncation_index}") ###Output _____no_output_____ ###Markdown Solving the inverse problem Smooth LASSO cross-validationSolve the smooth-lasso problem. Use the statistical learning ``SmoothLassoCV``method to solve the inverse problem over a range of α and λ values and determinethe best nuclear shielding tensor parameter distribution for the given 2D MAFdataset. Considering the limited build time for the documentation, we'll performthe cross-validation over a smaller $5 \times 5$ `x`-`y` grid. You mayincrease the grid resolution for your problem if desired. ###Code # setup the pre-defined range of alpha and lambda values lambdas = 10 ** (-5.2 - 1.25 * (np.arange(5) / 4)) alphas = 10 ** (-5.5 - 1.25 * (np.arange(5) / 4)) # setup the smooth lasso cross-validation class s_lasso = SmoothLassoCV( alphas=alphas, # A numpy array of alpha values. lambdas=lambdas, # A numpy array of lambda values. sigma=0.0045, # The standard deviation of noise from the 2D MAF data. folds=10, # The number of folds in n-folds cross-validation. inverse_dimension=inverse_dimensions, # previously defined inverse dimensions. verbose=1, # If non-zero, prints the progress as the computation proceeds. ) # run the fit method on the compressed kernel and compressed data. s_lasso.fit(K=compressed_K, s=compressed_s) ###Output _____no_output_____ ###Markdown The optimum hyper-parametersUse the :attr:`~mrinversion.linear_model.SmoothLassoCV.hyperparameters` attribute ofthe instance for the optimum hyper-parameters, $\alpha$ and $\lambda$,determined from the cross-validation. ###Code print(s_lasso.hyperparameters) ###Output _____no_output_____ ###Markdown The cross-validation surfaceOptionally, you may want to visualize the cross-validation error curve/surface. Usethe :attr:`~mrinversion.linear_model.SmoothLassoCV.cross_validation_curve` attributeof the instance, as follows ###Code CV_metric = s_lasso.cross_validation_curve # `CV_metric` is a CSDM object. # plot of the cross validation surface plt.figure(figsize=(5, 3.5)) ax = plt.subplot(projection="csdm") ax.contour(np.log10(CV_metric), levels=25) ax.scatter( -np.log10(s_lasso.hyperparameters["alpha"]), -np.log10(s_lasso.hyperparameters["lambda"]), marker="x", color="k", ) plt.tight_layout(pad=0.5) plt.show() ###Output _____no_output_____ ###Markdown The optimum solutionThe :attr:`~mrinversion.linear_model.SmoothLassoCV.f` attribute of the instance holdsthe solution corresponding to the optimum hyper-parameters, ###Code f_sol = s_lasso.f # f_sol is a CSDM object. ###Output _____no_output_____ ###Markdown where ``f_sol`` is the optimum solution. The fit residualsTo calculate the residuals between the data and predicted data(fit), use the:meth:`~mrinversion.linear_model.SmoothLassoCV.residuals` method, as follows, ###Code residuals = s_lasso.residuals(K=K, s=data_object_truncated) # residuals is a CSDM object. # The plot of the residuals. plot2D(residuals, vmax=data_object_truncated.max(), vmin=data_object_truncated.min()) ###Output _____no_output_____ ###Markdown The standard deviation of the residuals is close to the standard deviation of thenoise, $\sigma = 0.0043$. 
###Code residuals.std() ###Output _____no_output_____ ###Markdown Saving the solutionTo serialize the solution (nuclear shielding tensor parameter distribution) to afile, use the `save()` method of the CSDM object, for example, ###Code f_sol.save("Rb2O.2.25SiO2_inverse.csdf") # save the solution residuals.save("Rb2O.2.25SiO2_residue.csdf") # save the residuals ###Output _____no_output_____ ###Markdown Data VisualizationAt this point, we have solved the inverse problem and obtained an optimumdistribution of the nuclear shielding tensor parameters from the 2D MAF dataset. Youmay use any data visualization and interpretation tool of choice for furtheranalysis. In the following sections, we provide minimal visualization and analysisto complete the case study. Visualizing the 3D solution ###Code # Normalize the solution f_sol /= f_sol.max() # Convert the coordinates of the solution, `f_sol`, from Hz to ppm. [item.to("ppm", "nmr_frequency_ratio") for item in f_sol.dimensions] # The 3D plot of the solution plt.figure(figsize=(5, 4.4)) ax = plt.subplot(projection="3d") plot_3d(ax, f_sol, x_lim=[0, 150], y_lim=[0, 150], z_lim=[-50, -150]) plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown From the 3D plot, we observe two distinct regions: one for the $\text{Q}^4$sites and another for the $\text{Q}^3$ sites.Select the respective regions by using the appropriate array indexing, ###Code Q4_region = f_sol[0:7, 0:7, 4:25] Q4_region.description = "Q4 region" Q3_region = f_sol[0:8, 10:24, 11:30] Q3_region.description = "Q3 region" ###Output _____no_output_____ ###Markdown The plot of the respective regions is shown below. ###Code # Calculate the normalization factor for the 2D contours and 1D projections from the # original solution, `f_sol`. Use this normalization factor to scale the intensities # from the sub-regions. max_2d = [ f_sol.sum(axis=0).max().value, f_sol.sum(axis=1).max().value, f_sol.sum(axis=2).max().value, ] max_1d = [ f_sol.sum(axis=(1, 2)).max().value, f_sol.sum(axis=(0, 2)).max().value, f_sol.sum(axis=(0, 1)).max().value, ] plt.figure(figsize=(5, 4.4)) ax = plt.subplot(projection="3d") # plot for the Q4 region plot_3d( ax, Q4_region, x_lim=[0, 150], # the x-limit y_lim=[0, 150], # the y-limit z_lim=[-50, -150], # the z-limit max_2d=max_2d, # normalization factors for the 2D contours projections max_1d=max_1d, # normalization factors for the 1D projections cmap=cm.Reds_r, # colormap box=True, # draw a box around the region ) # plot for the Q3 region plot_3d( ax, Q3_region, x_lim=[0, 150], # the x-limit y_lim=[0, 150], # the y-limit z_lim=[-50, -150], # the z-limit max_2d=max_2d, # normalization factors for the 2D contours projections max_1d=max_1d, # normalization factors for the 1D projections cmap=cm.Blues_r, # colormap box=True, # draw a box around the region ) ax.legend() plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown Visualizing the isotropic projections.Because the $\text{Q}^4$ and $\text{Q}^3$ regions are fully resolvedafter the inversion, evaluating the contributions from these regions is trivial.For examples, the distribution of the isotropic chemical shifts for these regions are ###Code # Isotropic chemical shift projection of the 2D MAF dataset. data_iso = data_object_truncated.sum(axis=0) data_iso /= data_iso.max() # normalize the projection # Isotropic chemical shift projection of the tensor distribution dataset. f_sol_iso = f_sol.sum(axis=(0, 1)) # Isotropic chemical shift projection of the tensor distribution for the Q4 region. 
Q4_region_iso = Q4_region.sum(axis=(0, 1)) # Isotropic chemical shift projection of the tensor distribution for the Q3 region. Q3_region_iso = Q3_region.sum(axis=(0, 1)) # Normalize the three projections. f_sol_iso_max = f_sol_iso.max() f_sol_iso /= f_sol_iso_max Q4_region_iso /= f_sol_iso_max Q3_region_iso /= f_sol_iso_max # The plot of the different projections. plt.figure(figsize=(5.5, 3.5)) ax = plt.subplot(projection="csdm") ax.plot(f_sol_iso, "--k", label="tensor") ax.plot(Q4_region_iso, "r", label="Q4") ax.plot(Q3_region_iso, "b", label="Q3") ax.plot(data_iso, "-k", label="MAF") ax.plot(data_iso - f_sol_iso - 0.1, "gray", label="residuals") ax.set_title("Isotropic projection") ax.invert_xaxis() plt.legend() plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown AnalysisFor the analysis, we use the`statistics `_module of the csdmpy package. Following is the moment analysis of the 3D volumes forboth the $\text{Q}^4$ and $\text{Q}^3$ regions up to the second moment. ###Code int_Q4 = stats.integral(Q4_region) # volume of the Q4 distribution mean_Q4 = stats.mean(Q4_region) # mean of the Q4 distribution std_Q4 = stats.std(Q4_region) # standard deviation of the Q4 distribution int_Q3 = stats.integral(Q3_region) # volume of the Q3 distribution mean_Q3 = stats.mean(Q3_region) # mean of the Q3 distribution std_Q3 = stats.std(Q3_region) # standard deviation of the Q3 distribution print("Q4 statistics") print(f"\tpopulation = {100 * int_Q4 / (int_Q4 + int_Q3)}%") print("\tmean\n\t\tx:\t{0}\n\t\ty:\t{1}\n\t\tiso:\t{2}".format(*mean_Q4)) print("\tstandard deviation\n\t\tx:\t{0}\n\t\ty:\t{1}\n\t\tiso:\t{2}".format(*std_Q4)) print("Q3 statistics") print(f"\tpopulation = {100 * int_Q3 / (int_Q4 + int_Q3)}%") print("\tmean\n\t\tx:\t{0}\n\t\ty:\t{1}\n\t\tiso:\t{2}".format(*mean_Q3)) print("\tstandard deviation\n\t\tx:\t{0}\n\t\ty:\t{1}\n\t\tiso:\t{2}".format(*std_Q3)) ###Output _____no_output_____ ###Markdown The statistics shown above are according to the respective dimensions, that is, the`x`, `y`, and the isotropic chemical shifts. To convert the `x` and `y` statisticsto commonly used $\zeta_\sigma$ and $\eta_\sigma$ statistics, use the:func:`~mrinversion.kernel.utils.x_y_to_zeta_eta` function. ###Code mean_ζη_Q3 = x_y_to_zeta_eta(*mean_Q3[0:2]) # error propagation for calculating the standard deviation std_ζ = (std_Q3[0] * mean_Q3[0]) ** 2 + (std_Q3[1] * mean_Q3[1]) ** 2 std_ζ /= mean_Q3[0] ** 2 + mean_Q3[1] ** 2 std_ζ = np.sqrt(std_ζ) std_η = (std_Q3[1] * mean_Q3[0]) ** 2 + (std_Q3[0] * mean_Q3[1]) ** 2 std_η /= (mean_Q3[0] ** 2 + mean_Q3[1] ** 2) ** 2 std_η = (4 / np.pi) * np.sqrt(std_η) print("Q3 statistics") print(f"\tpopulation = {100 * int_Q3 / (int_Q4 + int_Q3)}%") print("\tmean\n\t\tζ:\t{0}\n\t\tη:\t{1}\n\t\tiso:\t{2}".format(*mean_ζη_Q3, mean_Q3[2])) print( "\tstandard deviation\n\t\tζ:\t{0}\n\t\tη:\t{1}\n\t\tiso:\t{2}".format( std_ζ, std_η, std_Q3[2] ) ) ###Output _____no_output_____ ###Markdown Result cross-verificationThe reported value for the Qn-species distribution from Baltisberger `et. al.` [f1]_is listed below and is consistent with the above result... 
list-table:: :widths: 7 15 28 25 25 :header-rows: 1 * - Species - Yield - Isotropic chemical shift, $\delta_\text{iso}$ - Shielding anisotropy, $\zeta_\sigma$: - Shielding asymmetry, $\eta_\sigma$: * - Q4 - $11.0 \pm 0.3$ % - $-98.0 \pm 5.64$ ppm - 0 ppm (fixed) - 0 (fixed) * - Q3 - $89 \pm 0.1$ % - $-89.5 \pm 4.65$ ppm - 80.7 ppm with a 6.7 ppm Gaussian broadening - 0 (fixed) Convert the 3D tensor distribution in Haeberlen parametersYou may re-bin the 3D tensor parameter distribution from a$\rho(\delta_\text{iso}, x, y)$ distribution to$\rho(\delta_\text{iso}, \zeta_\sigma, \eta_\sigma)$ distribution as follows. ###Code # Create the zeta and eta dimensions,, as shown below. zeta = cp.as_dimension(np.arange(40) * 4 - 40, unit="ppm", label="zeta") eta = cp.as_dimension(np.arange(16) / 15, label="eta") # Use the `to_Haeberlen_grid` function to convert the tensor parameter distribution. fsol_Hae = to_Haeberlen_grid(f_sol, zeta, eta) ###Output _____no_output_____ ###Markdown The 3D plot ###Code plt.figure(figsize=(5, 4.4)) ax = plt.subplot(projection="3d") plot_3d(ax, fsol_Hae, x_lim=[0, 1], y_lim=[-40, 120], z_lim=[-50, -150], alpha=0.4) plt.tight_layout() plt.show() ###Output _____no_output_____
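###Markdown As with `f_sol` and `residuals` above, the re-binned Haeberlen-grid distribution is a CSDM object and can be serialized with the same `save()` method; the file name below is arbitrary. ###Code
# Optional: save the zeta/eta re-binned distribution alongside the x-y solution.
fsol_Hae.save("Rb2O.2.25SiO2_inverse_Haeberlen.csdf")
###Output _____no_output_____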
training/distributed_training/pytorch/data_parallel/mnist/pytorch_smdataparallel_mnist_demo.ipynb
###Markdown Distributed data parallel MNIST training with PyTorch and SMDataParallel BackgroundSMDataParallel is a new capability in Amazon SageMaker to train deep learning models faster and cheaper. SMDataParallel is a distributed data parallel training framework for PyTorch. This notebook example shows how to use SMDataParallel with PyTorch in SageMaker using MNIST dataset.For more information:1. [PyTorch in SageMaker](https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html)2. [SMDataParallel PyTorch API Specification](https://sagemaker.readthedocs.io/en/stable/api/training/smd_data_parallel_pytorch.html)3. [Getting started with SMDataParallel on SageMaker](https://sagemaker.readthedocs.io/en/stable/api/training/smd_data_parallel.html)**NOTE:** This example requires SageMaker Python SDK v2.X. DatasetThis example uses the MNIST dataset. MNIST is a widely used dataset for handwritten digit classification. It consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). SageMaker execution rolesThe IAM role arn used to give training and hosting access to your data. See the [Amazon SageMaker Roles](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the sagemaker.get_execution_role() with the appropriate full IAM role arn string(s). ###Code pip install sagemaker --upgrade import sagemaker sagemaker_session = sagemaker.Session() role = sagemaker.get_execution_role() ###Output _____no_output_____ ###Markdown Model training with SMDataParallel Training scriptThe MNIST dataset is downloaded using the `torchvision.datasets` PyTorch module; you can see how this is implemented in the `train_pytorch_smdataparallel_mnist.py` training script that is printed out in the next cell.The training script provides the code you need for distributed data parallel (DDP) training using SMDataParallel. The training script is very similar to a PyTorch training script you might run outside of SageMaker, but modified to run with SMDataParallel. SMDataParallel's PyTorch client provides an alternative to PyTorch's native DDP. For details about how to use SMDataParallel's DDP in your native PyTorch script, see the Getting Started with SMDataParallel tutorials. ###Code !pygmentize code/train_pytorch_smdataparallel_mnist.py ###Output _____no_output_____ ###Markdown Estimator function optionsIn the following code block, you can update the estimator function to use a different instance type, instance count, and distrubtion strategy. You're also passing in the training script you reviewed in the previous cell.**Instance types**SMDataParallel supports model training on SageMaker with the following instance types only:1. ml.p3.16xlarge1. ml.p3dn.24xlarge [Recommended]1. ml.p4d.24xlarge [Recommended]**Instance count**To get the best performance and the most out of SMDataParallel, you should use at least 2 instances, but you can also use 1 for testing this example.**Distribution strategy**Note that to use DDP mode, you update the the `distribution` strategy, and set it to use `smdistributed dataparallel`. 
###Code from sagemaker.pytorch import PyTorch estimator = PyTorch(base_job_name='pytorch-smdataparallel-mnist', source_dir='code', entry_point='train_pytorch_smdataparallel_mnist.py', role=role, framework_version='1.6.0', py_version='py36', # For training with multinode distributed training, set this count. Example: 2 instance_count=2, # For training with p3dn instance use - ml.p3dn.24xlarge, with p4dn instance use - ml.p4d.24xlarge instance_type= 'ml.p3.16xlarge', sagemaker_session=sagemaker_session, # Training using SMDataParallel Distributed Training Framework distribution={'smdistributed':{ 'dataparallel':{ 'enabled': True } } }, debugger_hook_config=False) estimator.fit() ###Output _____no_output_____ ###Markdown Next stepsNow that you have a trained model, you can deploy an endpoint to host the model. After you deploy the endpoint, you can then test it with inference requests. The following cell will store the model_data variable to be used with the inference notebook. ###Code model_data = estimator.model_data print("Storing {} as model_data".format(model_data)) %store model_data ###Output _____no_output_____ ###Markdown Part 1: Distributed data parallel MNIST training with PyTorch and SageMaker distributed Background[Amazon SageMaker's distributed library](https://docs.aws.amazon.com/sagemaker/latest/dg/distributed-training.html) can be used to train deep learning models faster and cheaper. The [data parallel](https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel.html) feature in this library is a distributed data parallel training framework for PyTorch, TensorFlow, and MXNet. This notebook demonstrates how to use the SageMaker distributed data library to train a PyTorch model using the MNIST dataset.This notebook example shows how to use `smdistributed.dataparallel` with PyTorch in SageMaker using MNIST dataset.For more information:1. [SageMaker distributed data parallel PyTorch API Specification](https://sagemaker.readthedocs.io/en/stable/api/training/smd_data_parallel_pytorch.html)1. [Getting started with SageMaker distributed data parallel](https://sagemaker.readthedocs.io/en/stable/api/training/smd_data_parallel.html)1. [PyTorch in SageMaker](https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html) DatasetThis example uses the MNIST dataset. MNIST is a widely used dataset for handwritten digit classification. It consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). **NOTE:** This example requires SageMaker Python SDK v2.X. ###Code pip install sagemaker --upgrade ###Output _____no_output_____ ###Markdown SageMaker roleThe following code cell defines `role` which is the IAM role ARN used to create and run SageMaker training and hosting jobs. This is the same IAM role used to create this SageMaker Notebook instance. `role` must have permission to create a SageMaker training job and launch an endpoint to host a model. For granular policies you can use to grant these permissions, see [Amazon SageMaker Roles](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). If you do not require fine-tuned permissions for this demo, you can used the IAM managed policy AmazonSageMakerFullAccess to complete this demo. 
###Code import sagemaker sagemaker_session = sagemaker.Session() role = sagemaker.get_execution_role() role_name = role.split(["/"][-1]) print(f"The Amazon Resource Name (ARN) of the role used for this demo is: {role}") print(f"The name of the role used for this demo is: {role_name[-1]}") ###Output _____no_output_____ ###Markdown To verify that the role above has required permissions:1. Go to the IAM console: https://console.aws.amazon.com/iam/home.2. Select **Roles**.3. Enter the role name in the search box to search for that role. 4. Select the role.5. Use the **Permissions** tab to verify this role has required permissions attached. Model training with SageMaker distributed data parallel Training scriptThe MNIST dataset is downloaded using the `torchvision.datasets` PyTorch module; you can see how this is implemented in the `train_pytorch_smdataparallel_mnist.py` training script that is printed out in the next cell.The training script provides the code you need for distributed data parallel (DDP) training using SageMaker's distributed data parallel library (`smdistributed.dataparallel`). The training script is very similar to a PyTorch training script you might run outside of SageMaker, but modified to run with the `smdistributed.dataparallel` library. This library's PyTorch client provides an alternative to PyTorch's native DDP. For details about how to use `smdistributed.dataparallel`'s DDP in your native PyTorch script, see the [Modify a PyTorch Training Script Using SMD Data Parallel](https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel-modify-sdp.htmldata-parallel-modify-sdp-pt). ###Code !pygmentize code/train_pytorch_smdataparallel_mnist.py ###Output _____no_output_____ ###Markdown Estimator function optionsIn the following code block, you can update the estimator function to use a different instance type, instance count, and distrubtion strategy. You're also passing in the training script you reviewed in the previous cell to this estimator.**Instance types**`smdistributed.dataparallel` supports model training on SageMaker with the following instance types only. For best performance, it is recommended you use an instance type that supports Amazon Elastic Fabric Adapter (ml.p3dn.24xlarge and ml.p4d.24xlarge).1. ml.p3.16xlarge1. ml.p3dn.24xlarge [Recommended]1. ml.p4d.24xlarge [Recommended]**Instance count**To get the best performance and the most out of `smdistributed.dataparallel`, you should use at least 2 instances, but you can also use 1 for testing this example.**Distribution strategy**Note that to use DDP mode, you update the the `distribution` strategy, and set it to use `smdistributed dataparallel`. ###Code from sagemaker.pytorch import PyTorch estimator = PyTorch( base_job_name="pytorch-smdataparallel-mnist", source_dir="code", entry_point="train_pytorch_smdataparallel_mnist.py", role=role, framework_version="1.8.1", py_version="py36", # For training with multinode distributed training, set this count. Example: 2 instance_count=1, # For training with p3dn instance use - ml.p3dn.24xlarge, with p4dn instance use - ml.p4d.24xlarge instance_type="ml.p3dn.24xlarge", sagemaker_session=sagemaker_session, # Training using SMDataParallel Distributed Training Framework distribution={"smdistributed": {"dataparallel": {"enabled": True}}}, debugger_hook_config=False, ) estimator.fit() ###Output _____no_output_____ ###Markdown Next stepsNow that you have a trained model, you can deploy an endpoint to host the model. 
After you deploy the endpoint, you can then test it with inference requests. The following cell will store the model_data variable to be used with the inference notebook. ###Code model_data = estimator.model_data print("Storing {} as model_data".format(model_data)) %store model_data ###Output _____no_output_____ ###Markdown Distributed data parallel MNIST training with PyTorch and SMDataParallel BackgroundSMDataParallel is a new capability in Amazon SageMaker to train deep learning models faster and cheaper. SMDataParallel is a distributed data parallel training framework for PyTorch. This notebook example shows how to use SMDataParallel with PyTorch in SageMaker using MNIST dataset.For more information:1. [PyTorch in SageMaker](https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html)2. [SMDataParallel PyTorch API Specification](https://sagemaker.readthedocs.io/en/stable/api/training/smd_data_parallel_pytorch.html)3. [Getting started with SMDataParallel on SageMaker](https://sagemaker.readthedocs.io/en/stable/api/training/smd_data_parallel.html)**NOTE:** This example requires SageMaker Python SDK v2.X. DatasetThis example uses the MNIST dataset. MNIST is a widely used dataset for handwritten digit classification. It consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). SageMaker execution rolesThe IAM role arn used to give training and hosting access to your data. See the [Amazon SageMaker Roles](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the sagemaker.get_execution_role() with the appropriate full IAM role arn string(s). ###Code pip install sagemaker --upgrade import sagemaker sagemaker_session = sagemaker.Session() role = sagemaker.get_execution_role() ###Output _____no_output_____ ###Markdown Model training with SMDataParallel Training scriptThe MNIST dataset is downloaded using the `torchvision.datasets` PyTorch module; you can see how this is implemented in the `train_pytorch_smdataparallel_mnist.py` training script that is printed out in the next cell.The training script provides the code you need for distributed data parallel (DDP) training using SMDataParallel. The training script is very similar to a PyTorch training script you might run outside of SageMaker, but modified to run with SMDataParallel. SMDataParallel's PyTorch client provides an alternative to PyTorch's native DDP. For details about how to use SMDataParallel's DDP in your native PyTorch script, see the Getting Started with SMDataParallel tutorials. ###Code !pygmentize code/train_pytorch_smdataparallel_mnist.py ###Output _____no_output_____ ###Markdown Estimator function optionsIn the following code block, you can update the estimator function to use a different instance type, instance count, and distrubtion strategy. You're also passing in the training script you reviewed in the previous cell.**Instance types**SMDataParallel supports model training on SageMaker with the following instance types only:1. ml.p3.16xlarge1. ml.p3dn.24xlarge [Recommended]1. 
ml.p4d.24xlarge [Recommended]**Instance count**To get the best performance and the most out of SMDataParallel, you should use at least 2 instances, but you can also use 1 for testing this example.**Distribution strategy**Note that to use DDP mode, you update the the `distribution` strategy, and set it to use `smdistributed dataparallel`. ###Code from sagemaker.pytorch import PyTorch estimator = PyTorch(base_job_name='pytorch-smdataparallel-mnist', source_dir='code', entry_point='train_pytorch_smdataparallel_mnist.py', role=role, framework_version='1.8.0', py_version='py36', # For training with multinode distributed training, set this count. Example: 2 instance_count=2, # For training with p3dn instance use - ml.p3dn.24xlarge, with p4dn instance use - ml.p4d.24xlarge instance_type= 'ml.p3.16xlarge', sagemaker_session=sagemaker_session, # Training using SMDataParallel Distributed Training Framework distribution={'smdistributed':{ 'dataparallel':{ 'enabled': True } } }, debugger_hook_config=False) estimator.fit() ###Output _____no_output_____ ###Markdown Next stepsNow that you have a trained model, you can deploy an endpoint to host the model. After you deploy the endpoint, you can then test it with inference requests. The following cell will store the model_data variable to be used with the inference notebook. ###Code model_data = estimator.model_data print("Storing {} as model_data".format(model_data)) %store model_data ###Output _____no_output_____ ###Markdown Distributed data parallel MNIST training with PyTorch and SMDataParallel BackgroundSMDataParallel is a new capability in Amazon SageMaker to train deep learning models faster and cheaper. SMDataParallel is a distributed data parallel training framework for PyTorch. This notebook example shows how to use SMDataParallel with PyTorch in SageMaker using MNIST dataset.For more information:1. [PyTorch in SageMaker](https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html)2. [SMDataParallel PyTorch API Specification] 3. [Getting started with SMDataParallel on SageMaker] **NOTE:** This example requires SageMaker Python SDK v2.X. DatasetThis example uses the MNIST dataset. MNIST is a widely used dataset for handwritten digit classification. It consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). SageMaker execution rolesThe IAM role arn used to give training and hosting access to your data. See the [Amazon SageMaker Roles](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the sagemaker.get_execution_role() with the appropriate full IAM role arn string(s). ###Code pip install sagemaker --upgrade import sagemaker sagemaker_session = sagemaker.Session() role = sagemaker.get_execution_role() ###Output _____no_output_____ ###Markdown Model training with SMDataParallel Training scriptThe MNIST dataset is downloaded using the `torchvision.datasets` PyTorch module; you can see how this is implemented in the `train_pytorch_smdataparallel_mnist.py` training script that is printed out in the next cell.The training script provides the code you need for distributed data parallel (DDP) training using SMDataParallel. 
The training script is very similar to a PyTorch training script you might run outside of SageMaker, but modified to run with SMDataParallel. SMDataParallel's PyTorch client provides an alternative to PyTorch's native DDP. For details about how to use SMDataParallel's DDP in your native PyTorch script, see the Getting Started with SMDataParallel tutorials. ###Code !pygmentize code/train_pytorch_smdataparallel_mnist.py ###Output _____no_output_____ ###Markdown Estimator function optionsIn the following code block, you can update the estimator function to use a different instance type, instance count, and distrubtion strategy. You're also passing in the training script you reviewed in the previous cell.**Instance types**SMDataParallel supports model training on SageMaker with the following instance types only:1. ml.p3.16xlarge1. ml.p3dn.24xlarge [Recommended]1. ml.p4d.24xlarge [Recommended]**Instance count**To get the best performance and the most out of SMDataParallel, you should use at least 2 instances, but you can also use 1 for testing this example.**Distribution strategy**Note that to use DDP mode, you update the the `distribution` strategy, and set it to use `smdistributed dataparallel`. ###Code from sagemaker.pytorch import PyTorch estimator = PyTorch(base_job_name='pytorch-smdataparallel-mnist', source_dir='code', entry_point='train_pytorch_smdataparallel_mnist.py', role=role, framework_version='1.6.0', py_version='py36', # For training with multinode distributed training, set this count. Example: 2 instance_count=2, # For training with p3dn instance use - ml.p3dn.24xlarge instance_type= 'ml.p3.16xlarge', sagemaker_session=sagemaker_session, # Training using SMDataParallel Distributed Training Framework distribution={'smdistributed':{ 'dataparallel':{ 'enabled': True } } }, debugger_hook_config=False) estimator.fit() ###Output _____no_output_____ ###Markdown Next stepsNow that you have a trained model, you can deploy an endpoint to host the model. After you deploy the endpoint, you can then test it with inference requests. The following cell will store the model_data variable to be used with the inference notebook. ###Code model_data = estimator.model_data print("Storing {} as model_data".format(model_data)) %store model_data ###Output _____no_output_____ ###Markdown Part 1: Distributed data parallel MNIST training with PyTorch and SageMaker distributed Background[Amazon SageMaker's distributed library](https://docs.aws.amazon.com/sagemaker/latest/dg/distributed-training.html) can be used to train deep learning models faster and cheaper. The [data parallel](https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel.html) feature in this library is a distributed data parallel training framework for PyTorch, TensorFlow, and MXNet. This notebook demonstrates how to use the SageMaker distributed data library to train a PyTorch model using the MNIST dataset.This notebook example shows how to use `smdistributed.dataparallel` with PyTorch in SageMaker using MNIST dataset.For more information:1. [SageMaker distributed data parallel PyTorch API Specification](https://sagemaker.readthedocs.io/en/stable/api/training/smd_data_parallel_pytorch.html)1. [Getting started with SageMaker distributed data parallel](https://sagemaker.readthedocs.io/en/stable/api/training/smd_data_parallel.html)1. [PyTorch in SageMaker](https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html) DatasetThis example uses the MNIST dataset. 
MNIST is a widely used dataset for handwritten digit classification. It consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). **NOTE:** This example requires SageMaker Python SDK v2.X. ###Code pip install sagemaker --upgrade ###Output _____no_output_____ ###Markdown SageMaker roleThe following code cell defines `role` which is the IAM role ARN used to create and run SageMaker training and hosting jobs. This is the same IAM role used to create this SageMaker Notebook instance. `role` must have permission to create a SageMaker training job and launch an endpoint to host a model. For granular policies you can use to grant these permissions, see [Amazon SageMaker Roles](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). If you do not require fine-tuned permissions for this demo, you can used the IAM managed policy AmazonSageMakerFullAccess to complete this demo. ###Code import sagemaker sagemaker_session = sagemaker.Session() role = sagemaker.get_execution_role() role_name = role.split(['/'][-1]) print(f'The Amazon Resource Name (ARN) of the role used for this demo is: {role}') print(f'The name of the role used for this demo is: {role_name[-1]}') ###Output _____no_output_____ ###Markdown To verify that the role above has required permissions:1. Go to the IAM console: https://console.aws.amazon.com/iam/home.2. Select **Roles**.3. Enter the role name in the search box to search for that role. 4. Select the role.5. Use the **Permissions** tab to verify this role has required permissions attached. Model training with SageMaker distributed data parallel Training scriptThe MNIST dataset is downloaded using the `torchvision.datasets` PyTorch module; you can see how this is implemented in the `train_pytorch_smdataparallel_mnist.py` training script that is printed out in the next cell.The training script provides the code you need for distributed data parallel (DDP) training using SageMaker's distributed data parallel library (`smdistributed.dataparallel`). The training script is very similar to a PyTorch training script you might run outside of SageMaker, but modified to run with the `smdistributed.dataparallel` library. This library's PyTorch client provides an alternative to PyTorch's native DDP. For details about how to use `smdistributed.dataparallel`'s DDP in your native PyTorch script, see the [Modify a PyTorch Training Script Using SMD Data Parallel](https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel-modify-sdp.htmldata-parallel-modify-sdp-pt). ###Code !pygmentize code/train_pytorch_smdataparallel_mnist.py ###Output _____no_output_____ ###Markdown Estimator function optionsIn the following code block, you can update the estimator function to use a different instance type, instance count, and distrubtion strategy. You're also passing in the training script you reviewed in the previous cell to this estimator.**Instance types**`smdistributed.dataparallel` supports model training on SageMaker with the following instance types only.1. ml.p3.16xlarge1. ml.p3dn.24xlarge [Recommended]1. 
ml.p4d.24xlarge [Recommended]**Instance count**To get the best performance and the most out of `smdistributed.dataparallel`, you should use at least 2 instances, but you can also use 1 for testing this example.**Distribution strategy**Note that to use DDP mode, you update the the `distribution` strategy, and set it to use `smdistributed dataparallel`. ###Code from sagemaker.pytorch import PyTorch estimator = PyTorch(base_job_name='pytorch-smdataparallel-mnist', source_dir='code', entry_point='train_pytorch_smdataparallel_mnist.py', role=role, framework_version='1.8.1', py_version='py36', # For training with multinode distributed training, set this count. Example: 2 instance_count=1, # For training with p3dn instance use - ml.p3dn.24xlarge, with p4dn instance use - ml.p4d.24xlarge instance_type='ml.p3dn.24xlarge', sagemaker_session=sagemaker_session, # Training using SMDataParallel Distributed Training Framework distribution={'smdistributed':{ 'dataparallel':{ 'enabled': True } } }, debugger_hook_config=False) estimator.fit() ###Output _____no_output_____ ###Markdown Next stepsNow that you have a trained model, you can deploy an endpoint to host the model. After you deploy the endpoint, you can then test it with inference requests. The following cell will store the model_data variable to be used with the inference notebook. ###Code model_data = estimator.model_data print("Storing {} as model_data".format(model_data)) %store model_data ###Output _____no_output_____ ###Markdown Part 1: Distributed data parallel MNIST training with PyTorch and SageMaker distributed Background[Amazon SageMaker's distributed library](https://docs.aws.amazon.com/sagemaker/latest/dg/distributed-training.html) can be used to train deep learning models faster and cheaper. The [data parallel](https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel.html) feature in this library is a distributed data parallel training framework for PyTorch, TensorFlow, and MXNet. This notebook demonstrates how to use the SageMaker distributed data library to train a PyTorch model using the MNIST dataset.This notebook example shows how to use `smdistributed.dataparallel` with PyTorch in SageMaker using MNIST dataset.For more information:1. [SageMaker distributed data parallel PyTorch API Specification](https://sagemaker.readthedocs.io/en/stable/api/training/smd_data_parallel_pytorch.html)1. [Getting started with SageMaker distributed data parallel](https://sagemaker.readthedocs.io/en/stable/api/training/smd_data_parallel.html)1. [PyTorch in SageMaker](https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html) DatasetThis example uses the MNIST dataset. MNIST is a widely used dataset for handwritten digit classification. It consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). **NOTE:** This example requires SageMaker Python SDK v2.X. ###Code pip install sagemaker --upgrade ###Output _____no_output_____ ###Markdown SageMaker roleThe following code cell defines `role` which is the IAM role ARN used to create and run SageMaker training and hosting jobs. This is the same IAM role used to create this SageMaker Notebook instance. `role` must have permission to create a SageMaker training job and launch an endpoint to host a model. 
For granular policies you can use to grant these permissions, see [Amazon SageMaker Roles](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). If you do not require fine-tuned permissions for this demo, you can used the IAM managed policy AmazonSageMakerFullAccess to complete this demo. ###Code import sagemaker sagemaker_session = sagemaker.Session() role = sagemaker.get_execution_role() role_name = role.split(['/'][-1]) print(f'The Amazon Resource Name (ARN) of the role used for this demo is: {role}') print(f'The name of the role used for this demo is: {role_name[-1]}') ###Output _____no_output_____ ###Markdown To verify that the role above has required permissions:1. Go to the IAM console: https://console.aws.amazon.com/iam/home.2. Select **Roles**.3. Enter the role name in the search box to search for that role. 4. Select the role.5. Use the **Permissions** tab to verify this role has required permissions attached. Model training with SageMaker distributed data parallel Training scriptThe MNIST dataset is downloaded using the `torchvision.datasets` PyTorch module; you can see how this is implemented in the `train_pytorch_smdataparallel_mnist.py` training script that is printed out in the next cell.The training script provides the code you need for distributed data parallel (DDP) training using SageMaker's distributed data parallel library (`smdistributed.dataparallel`). The training script is very similar to a PyTorch training script you might run outside of SageMaker, but modified to run with the `smdistributed.dataparallel` library. This library's PyTorch client provides an alternative to PyTorch's native DDP. For details about how to use `smdistributed.dataparallel`'s DDP in your native PyTorch script, see the [Modify a PyTorch Training Script Using SMD Data Parallel](https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel-modify-sdp.htmldata-parallel-modify-sdp-pt). ###Code !pygmentize code/train_pytorch_smdataparallel_mnist.py ###Output _____no_output_____ ###Markdown Estimator function optionsIn the following code block, you can update the estimator function to use a different instance type, instance count, and distrubtion strategy. You're also passing in the training script you reviewed in the previous cell to this estimator.**Instance types**`smdistributed.dataparallel` supports model training on SageMaker with the following instance types only. For best performance, it is recommended you use an instance type that supports [Amazon Elastic Fabric Adapter](https://aws.amazon.com/hpc/efa/) (ml.p3dn.24xlarge and ml.p4d.24xlarge).1. ml.p3.16xlarge1. ml.p3dn.24xlarge [Recommended]1. ml.p4d.24xlarge [Recommended]**Instance count**To get the best performance and the most out of `smdistributed.dataparallel`, you should use at least 2 instances, but you can also use 1 for testing this example.**Distribution strategy**Note that to use DDP mode, you update the the `distribution` strategy, and set it to use `smdistributed dataparallel`. ###Code from sagemaker.pytorch import PyTorch estimator = PyTorch(base_job_name='pytorch-smdataparallel-mnist', source_dir='code', entry_point='train_pytorch_smdataparallel_mnist.py', role=role, framework_version='1.8.1', py_version='py36', # For training with multinode distributed training, set this count. 
Example: 2 instance_count=1, # For training with p3dn instance use - ml.p3dn.24xlarge, with p4dn instance use - ml.p4d.24xlarge instance_type='ml.p3dn.24xlarge', sagemaker_session=sagemaker_session, # Training using SMDataParallel Distributed Training Framework distribution={'smdistributed':{ 'dataparallel':{ 'enabled': True } } }, debugger_hook_config=False) estimator.fit() ###Output _____no_output_____ ###Markdown Next stepsNow that you have a trained model, you can deploy an endpoint to host the model. After you deploy the endpoint, you can then test it with inference requests. The following cell will store the model_data variable to be used with the inference notebook. ###Code model_data = estimator.model_data print("Storing {} as model_data".format(model_data)) %store model_data ###Output _____no_output_____ ###Markdown Distributed data parallel MNIST training with PyTorch and SMDataParallel BackgroundSMDataParallel is a new capability in Amazon SageMaker to train deep learning models faster and cheaper. SMDataParallel is a distributed data parallel training framework for PyTorch. This notebook example shows how to use SMDataParallel with PyTorch in SageMaker using MNIST dataset.For more information:1. [PyTorch in SageMaker](https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html)2. [SMDataParallel PyTorch API Specification] 3. [Getting started with SMDataParallel on SageMaker] **NOTE:** This example requires SageMaker Python SDK v2.X. DatasetThis example uses the MNIST dataset. MNIST is a widely used dataset for handwritten digit classification. It consists of 70,000 labeled 28x28 pixel grayscale images of hand-written digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits). SageMaker execution rolesThe IAM role arn used to give training and hosting access to your data. See the [Amazon SageMaker Roles](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the sagemaker.get_execution_role() with the appropriate full IAM role arn string(s). ###Code import sagemaker sagemaker_session = sagemaker.Session() role = sagemaker.get_execution_role() ###Output _____no_output_____ ###Markdown Model training with SMDataParallel Training scriptThe MNIST dataset is downloaded using the `torchvision.datasets` PyTorch module; you can see how this is implemented in the `train_pytorch_smdataparallel_mnist.py` training script that is printed out in the next cell.The training script provides the code you need for distributed data parallel (DDP) training using SMDataParallel. The training script is very similar to a PyTorch training script you might run outside of SageMaker, but modified to run with SMDataParallel. SMDataParallel's PyTorch client provides an alternative to PyTorch's native DDP. For details about how to use SMDataParallel's DDP in your native PyTorch script, see the Getting Started with SMDataParallel tutorials. ###Code !pygmentize code/train_pytorch_smdataparallel_mnist.py ###Output _____no_output_____ ###Markdown Estimator function optionsIn the following code block, you can update the estimator function to use a different instance type, instance count, and distrubtion strategy. 
You're also passing in the training script you reviewed in the previous cell.**Instance types**SMDataParallel supports model training on SageMaker with the following instance types only:1. ml.p3.16xlarge1. ml.p3dn.24xlarge [Recommended]1. ml.p4d.24xlarge [Recommended]**Instance count**To get the best performance and the most out of SMDataParallel, you should use at least 2 instances, but you can also use 1 for testing this example.**Distribution strategy**Note that to use DDP mode, you update the `distribution` strategy, and set it to use `smdistributed dataparallel`. ###Code from sagemaker.pytorch import PyTorch estimator = PyTorch(base_job_name='pytorch-smdataparallel-mnist', source_dir='code', entry_point='train_pytorch_smdataparallel_mnist.py', role=role, framework_version='1.6.0', py_version='py3', # For training with multinode distributed training, set this count. Example: 2 instance_count=2, # For training with p3dn instance use - ml.p3dn.24xlarge instance_type='ml.p3.16xlarge', sagemaker_session=sagemaker_session, # Training using SMDataParallel Distributed Training Framework distribution={'smdistributed':{ 'dataparallel':{ 'enabled': True } } }, debugger_hook_config=False) estimator.fit() ###Output _____no_output_____ ###Markdown Next stepsNow that you have a trained model, you can deploy an endpoint to host the model. After you deploy the endpoint, you can then test it with inference requests. The following cell will store the model_data variable to be used with the inference notebook. ###Code model_data = estimator.model_data print("Storing {} as model_data".format(model_data)) %store model_data ###Output _____no_output_____
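###Markdown As a rough illustration of these next steps, the cell below sketches how the stored `model_data` could be hosted behind a real-time endpoint with the SageMaker Python SDK. This is a hedged sketch rather than part of the original example: the `inference.py` entry point and the instance type are assumptions, and the companion inference notebook remains the authoritative walkthrough. ###Code
# Hedged sketch (not executed here): deploying the stored model artifact as a real-time endpoint.
# Assumption: a handler script at code/inference.py implementing model_fn/predict_fn exists.
from sagemaker.pytorch import PyTorchModel

pytorch_model = PyTorchModel(
    model_data=model_data,        # S3 URI stored above with %store
    role=role,
    entry_point='inference.py',   # hypothetical inference handler
    source_dir='code',
    framework_version='1.6.0',
    py_version='py3',
)

# predictor = pytorch_model.deploy(initial_instance_count=1, instance_type='ml.g4dn.xlarge')
# predictor.predict(batch_of_mnist_images)  # send a test inference request
# predictor.delete_endpoint()               # clean up to avoid ongoing charges
###Output _____no_output_____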
appendix/algo_app/classical_optimization.ipynb
###Markdown _*VQE algorithm: application to optimization problems*_ The latest version of this notebook is available on https://github.com/QISKit/qiskit-tutorial.*** ContributorsAntonio Mezzacapo, Jay Gambetta, Kristan Temme, Ramis Movassagh, Albert Frisch IntroductionMany problems in quantitative fields such as finance and engineering are optimization problems. Optimization problems lie at the core of complex decision-making and definition of strategies. Optimization (or combinatorial optimization) means searching for an optimal solution in a finite or countably infinite set of potential solutions. Optimality is defined with respect to some criterion function, which is to be minimized or maximized. This is typically called the cost function or objective function. **Typical optimization problems**Minimization: cost, distance, length of a traversal, weight, processing time, material, energy consumption, number of objectsMaximization: profit, value, output, return, yield, utility, efficiency, capacity, number of objects We consider here two problems of practical interest in many fields, and show how they can be mapped and solved on quantum computers. Weighted MaxCutMaxCut is an NP-complete problem, with applications in clustering, network science, and statistical physics. To grasp how practical applications are mapped into given MaxCut instances, consider a system of many people that can interact and influence each other. Individuals can be represented by vertices of a graph, and their interactions seen as pairwise connections between vertices of the graph, or edges. With this representation in mind, it is easy to model typical marketing problems. For example, suppose that it is assumed that individuals will influence each other's buying decisions, and knowledge is given about how strongly they will influence each other. The influence can be modeled by weights assigned on each edge of the graph. It is possible then to predict the outcome of a marketing strategy in which products are offered for free to some individuals, and then ask which is the optimal subset of individuals that should get the free products, in order to maximize revenues.The formal definition of this problem is the following:Consider an $n$-node non-directed graph *G = (V, E)* where *|V| = n* with edge weights $w_{ij}>0$, $w_{ij}=w_{ji}$, for $(i, j)\in E$. A cut is defined as a partition of the original set V into two subsets. The cost function to be optimized is in this case the sum of weights of edges connecting points in the two different subsets, *crossing* the cut. By assigning $x_i=0$ or $x_i=1$ to each node $i$, one tries to maximize the global profit function (here and in the following summations run over indices 0,1,...n-1)$$C(\textbf{x}) = \sum_{i,j} w_{ij} x_i (1-x_j).$$In our simple marketing model, $w_{ij}$ represents the probability that the person $j$ will buy a product after $i$ gets a free one. Note that the weights $w_{ij}$ can in principle be greater than $1$, corresponding to the case where the individual $j$ will buy more than one product. Maximizing the total buying probability corresponds to maximizing the total future revenues. In the case where the profit probability will be greater than the cost of the initial free samples, the strategy is a convenient one. 
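As a quick numeric illustration of this cost function (a small hand-made example, separate from the 4-node graph studied below), the following cell evaluates $C(\textbf{x})$ for a toy 3-node graph and one candidate partition. ###Code
# Illustrative evaluation of C(x) = sum_{i,j} w_ij * x_i * (1 - x_j); not part of the original tutorial.
import numpy as np

w_toy = np.array([[0.0, 1.0, 2.0],
                  [1.0, 0.0, 0.5],
                  [2.0, 0.5, 0.0]])   # symmetric edge weights of a toy 3-node graph
x_toy = np.array([1, 0, 1])           # candidate partition: nodes 0 and 2 on one side, node 1 on the other

cut_value = sum(w_toy[i, j] * x_toy[i] * (1 - x_toy[j]) for i in range(3) for j in range(3))
print(cut_value)  # edges (0,1) and (1,2) cross the cut, so the value is 1.0 + 0.5 = 1.5
###Output _____no_output_____ ###Markdown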
An extension to this model has the nodes themselves carry weights, which can be regarded, in our marketing model, as the likelihood that a person granted with a free sample of the product will buy it again in the future. With this additional information in our model, the objective function to maximize becomes $$C(\textbf{x}) = \sum_{i,j} w_{ij} x_i (1-x_j)+\sum_i w_i x_i. $$ In order to find a solution to this problem on a quantum computer, one needs first to map it to an Ising Hamiltonian. This can be done with the assignment $x_i\rightarrow (1-Z_i)/2$, where $Z_i$ is the Pauli Z operator that has eigenvalues $\pm 1$. Doing this we find that $$C(\textbf{Z}) = \sum_{i,j} \frac{w_{ij}}{4} (1-Z_i)(1+Z_j) + \sum_i w_i (1-Z_i)/2 = -\frac{1}{2}\left( \sum_{i<j} w_{ij} Z_iZ_j +\sum_i w_i Z_i\right)+\mathrm{const},$$where const = $\sum_{i<j}w_{ij}/2+\sum_i w_i/2 $. In other terms, the weighted MaxCut problem is equivalent to minimizing the Ising Hamiltonian $$ H = \sum_i w_i Z_i + \sum_{i<j} w_{ij} Z_iZ_j.$$ Traveling Salesman ProblemIn addition to being a notorious NP-complete problem that has drawn the attention of computer scientists and mathematicians for over two centuries, the Traveling Salesman Problem (TSP) has important bearings on finance and marketing, as its name suggests. Colloquially speaking, the traveling salesman is a person that goes from city to city to sell merchandise. The objective in this case is to find the shortest path that would enable the salesman to visit all the cities and return to his hometown, i.e. the city where he started traveling. By doing this, the salesman gets to maximize potential sales in the least amount of time. The problem derives its importance from its "hardness" and ubiquitous equivalence to other relevant combinatorial optimization problems that arise in practice. The mathematical formulation with some early analysis was proposed by W.R. Hamilton in the 19th century. Mathematically the problem is, as in the case of MaxCut, best abstracted in terms of graphs. The TSP on the nodes of a graph asks for the shortest *Hamiltonian cycle* that can be taken through each of the nodes. A Hamiltonian cycle is a closed path that uses every vertex of a graph once. The general solution is unknown and an algorithm that finds it efficiently (e.g., in polynomial time) is not expected to exist.Find the shortest Hamiltonian cycle in a graph $G=(V,E)$ with $N=|V|$ nodes and distances $w_{ij}$ (distance from vertex $i$ to vertex $j$). A Hamiltonian cycle is described by $N^2$ variables $x_{i,p}$, where $i$ represents the node and $p$ represents its order in a prospective cycle. The decision variable takes the value 1 if the solution occurs at node $i$ at time order $p$. We require that every node appears exactly once in the cycle, and that exactly one node occupies each time step. This amounts to the two constraints (here and in the following, whenever not specified, the summands run over 0,1,...N-1)$$\sum_{i} x_{i,p} = 1 ~~\forall p$$$$\sum_{p} x_{i,p} = 1 ~~\forall i.$$For nodes in our prospective ordering, if $x_{i,p}$ and $x_{j,p+1}$ are both 1, then there should be an energy penalty if $(i,j) \notin E$ (not connected in the graph). The form of this penalty is $$\sum_{i,j\notin E}\sum_{p} x_{i,p}x_{j,p+1}>0,$$ where the boundary condition of the Hamiltonian cycle, $(p=N)\equiv (p=0)$, is assumed. However, here a fully connected graph is assumed, so this term is not included. 
The distance that needs to be minimized is $$C(\textbf{x})=\sum_{i,j}w_{ij}\sum_{p} x_{i,p}x_{j,p+1}.$$Putting this all together in a single objective function to be minimized, we get the following:$$C(\textbf{x})=\sum_{i,j}w_{ij}\sum_{p} x_{i,p}x_{j,p+1}+ A\sum_p\left(1- \sum_i x_{i,p}\right)^2+A\sum_i\left(1- \sum_p x_{i,p}\right)^2,$$where $A$ is a free parameter. One needs to ensure that $A$ is large enough so that these constraints are respected. One way to do this is to choose $A$ such that $A > \mathrm{max}(w_{ij})$. Furthermore, since the problem has the salesperson returning to the original city, it is possible, without loss of generality, to set $x_{00} = 1$, $x_{i0} = 0 \; \forall i\neq 0$, and $x_{0p} = 0 \;\forall p\neq 0$. Doing this, the objective functions becomes $$C(\textbf{x})=\sum_{i,j=1}^{N-1}w_{ij}\sum_{p=1}^{N-1} x_{i,p}x_{j,p+1}+\sum_{j=1}^{N-1}w_{0j} x_{j,1}+\sum_{i=1}^{N-1}w_{i0} x_{i,N-1}+ A\sum_{p=1}^{N-1}\left(1- \sum_{i=1}^{N-1} x_{i,p}\right)^2+A\sum_{i=1}^{N-1}\left(1- \sum_{p=1}^{N-1} x_{i,p}\right)^2.$$Once again, it is easy to map the problem in this form to a quantum computer, and the solution will be found by minimizing a Ising Hamiltonian. Approximate Universal Quantum Computing for Optimization ProblemsThere has been a considerable amount of interest in recent times about the use of quantum computers to find a solution to combinatorial problems. It is important to say that, given the classical nature of combinatorial problems, exponential speedup in using quantum computers compared to the best classical algorithms is not guaranteed. However, due to the nature and importance of the target problems, it is worth investigating heuristic approaches on a quantum computer that could indeed speed up some problem instances. Here we demonstrate an approach that is based on the Quantum Approximate Optimization Algorithm by Farhi, Goldstone, and Gutman (2014). We frame the algorithm in the context of *approximate quantum computing*, given its heuristic nature. The Algorithm works as follows:1. Choose the $w_i$ and $w_{ij}$ in the target Ising problem. In principle, even higher powers of Z are allowed.2. Choose the depth of the quantum circuit $m$. Note that the depth can be modified adaptively.3. Choose a set of controls $\theta$ and make a trial function $|\psi(\boldsymbol\theta)\rangle$, built using a quantum circuit made of C-Phase gates and single-qubit Y rotations, parameterized by the components of $\boldsymbol\theta$. 4. Evaluate $C(\boldsymbol\theta) = \langle\psi(\boldsymbol\theta)~|H|~\psi(\boldsymbol\theta)\rangle = \sum_i w_i \langle\psi(\boldsymbol\theta)~|Z_i|~\psi(\boldsymbol\theta)\rangle+ \sum_{i<j} w_{ij} \langle\psi(\boldsymbol\theta)~|Z_iZ_j|~\psi(\boldsymbol\theta)\rangle$ by sampling the outcome of the circuit in the Z-basis and adding the expectation values of the individual Ising terms together. In general, different control points around $\boldsymbol\theta$ have to be estimated, depending on the classical optimizer chosen. 5. Use a classical optimizer to choose a new set of controls.6. Continue until $C(\boldsymbol\theta)$ reaches a minimum, close enough to the solution $\boldsymbol\theta^*$.7. Use the last $\boldsymbol\theta$ to generate a final set of samples from the distribution $|\langle z_i~|\psi(\boldsymbol\theta)\rangle|^2\;\forall i$ to obtain the answer. It is our belief the difficulty of finding good heuristic algorithms will come down to the choice of an appropriate trial wavefunction. 
For example, one could consider a trial function whose entanglement best aligns with the target problem, or simply make the amount of entanglement a variable. In this tutorial, we will consider a simple trial function of the form$$|\psi(\theta)\rangle = [U_\mathrm{single}(\boldsymbol\theta) U_\mathrm{entangler}]^m |+\rangle$$where $U_\mathrm{entangler}$ is a collection of C-Phase gates (fully entangling gates), and $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, where $n$ is the number of qubits and $m$ is the depth of the quantum circuit. The motivation for this choice is that for these classical problems this choice allows us to search over the space of quantum states that have only real coefficients, still exploiting the entanglement to potentially converge faster to the solution.One advantage of using this sampling method compared to adiabatic approaches is that the target Ising Hamiltonian does not have to be implemented directly on hardware, allowing this algorithm not to be limited to the connectivity of the device. Furthermore, higher-order terms in the cost function, such as $Z_iZ_jZ_k$, can also be sampled efficiently, whereas in adiabatic or annealing approaches they are generally impractical to deal with. References:- A. Lucas, Frontiers in Physics 2, 5 (2014)- E. Farhi, J. Goldstone, S. Gutmann e-print arXiv 1411.4028 (2014)- D. Wecker, M. B. Hastings, M. Troyer Phys. Rev. A 94, 022309 (2016)- E. Farhi, J. Goldstone, S. Gutmann, H. Neven e-print arXiv 1703.06199 (2017) ###Code # useful additional packages import matplotlib.pyplot as plt import matplotlib.axes as axes %matplotlib inline import numpy as np from scipy import linalg as la from itertools import permutations from functools import partial import networkx as nx # importing the QISKit from qiskit import QuantumCircuit, QuantumProgram ################# import Qconfig and set APIToken and API url and prepare backends ############ try: import sys sys.path.append("../../") # go to parent dir import Qconfig qx_config = { "APItoken": Qconfig.APItoken, "url": Qconfig.config['url']} except Exception as e: print(e) qx_config = { "APItoken":"YOUR_TOKEN_HERE", "url":"https://quantumexperience.ng.bluemix.net/api"} #set api from IBMQuantumExperience import IBMQuantumExperience api = IBMQuantumExperience(token=qx_config['APItoken'], config={'url': qx_config['url']}) #prepare remote backends from qiskit.backends import discover_local_backends, discover_remote_backends, get_backend_instance remote_backends = discover_remote_backends(api) #we have to call this to connect to remote backends local_backends = discover_local_backends() print("Remote Backends:") print(remote_backends) print("Local Backends") print(local_backends) ################### end of preparing backends ######################## # import basic plot tools from qiskit.tools.visualization import plot_histogram # import optimization tools from qiskit.tools.apps.optimization import trial_circuit_ry, SPSA_optimization, SPSA_calibration from qiskit.tools.apps.optimization import Energy_Estimate, make_Hamiltonian, eval_hamiltonian, group_paulis from qiskit.tools.qi.pauli import Pauli def obj_funct(Q_program, pauli_list, entangler_map, coupling_map, initial_layout, n, m, backend, shots, theta): """ Evaluate the objective function for a classical optimization problem. 
Q_program is an instance object of the class quantum program pauli_list defines the cost function as list of ising terms with weights theta are the control parameters n is the number of qubits m is the depth of the trial function backend is the type of backend to run it on shots is the number of shots to run. Taking shots = 1 only works in simulation and computes an exact average of the cost function on the quantum state """ std_cost=0 # to add later circuits = ['trial_circuit'] if shots==1: Q_program.add_circuit('trial_circuit', trial_circuit_ry(n, m, theta, entangler_map, None, False)) result = Q_program.execute(circuits, backend=backend, coupling_map=coupling_map, initial_layout=initial_layout, shots=shots) state = result.get_data('trial_circuit')['quantum_state'] cost=Energy_Estimate_Exact(state,pauli_list,True) else: Q_program.add_circuit('trial_circuit', trial_circuit_ry(n, m, theta, entangler_map, None, True)) result = Q_program.execute(circuits, backend=backend, coupling_map=coupling_map, initial_layout=initial_layout, shots=shots) data = result.get_counts('trial_circuit') cost = Energy_Estimate(data, pauli_list) return cost, std_cost ###Output _____no_output_____ ###Markdown MaxCut on 4 Qubits ###Code # Generating a graph of 4 nodes n =4 # Number of nodes in graph G=nx.Graph() G.add_nodes_from(np.arange(0,n,1)) elist=[(0,1,1.0),(0,2,1.0),(0,3,1.0),(1,2,1.0),(2,3,1.0)] # tuple is (i,j,weight) where (i,j) is the edge G.add_weighted_edges_from(elist) colors = ['r' for node in G.nodes()] default_axes = plt.axes(frameon=True) default_axes.set_xlim(-0.1,1.1) default_axes.set_ylim(-0.1,1.1) nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, ax=default_axes) # Computing the weight matrix from the random graph w = np.zeros([n,n]) for i in range(n): for j in range(n): temp = G.get_edge_data(i,j,default=0) if temp != 0: w[i,j] = temp['weight'] print(w) ###Output [[0. 1. 1. 1.] [1. 0. 1. 0.] [1. 1. 0. 1.] [1. 0. 1. 0.]] ###Markdown Brute force approachTry all possible $2^n$ combinations. For $n = 4$, as in this example, one deals with only 16 combinations, but for n = 1000, one has about 1.071509e+301 combinations, which is impractical to deal with by using a brute force approach. 
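As a quick check of how fast this count grows (an illustrative aside, not part of the original workflow): ###Code
# Quick look at the exponential growth of the search space; uses its own loop variable
# (n_nodes) so the notebook's n = 4 defined above is left untouched.
for n_nodes in (4, 20, 100, 1000):
    print(f"n = {n_nodes:4d}: 2**n = {float(2**n_nodes):.6e} candidate assignments")
###Output _____no_output_____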
###Code best_cost_brute = 0 for b in range(2**n): x = [int(t) for t in reversed(list(bin(b)[2:].zfill(n)))] cost = 0 for i in range(n): for j in range(n): cost = cost + w[i,j]*x[i]*(1-x[j]) if best_cost_brute < cost: best_cost_brute = cost xbest_brute = x print('case = ' + str(x)+ ' cost = ' + str(cost)) colors = [] for i in range(n): if xbest_brute[i] == 0: colors.append('r') else: colors.append('b') nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8) #plt.show() print('\nBest solution = ' + str(xbest_brute) + ' cost = ' + str(best_cost_brute)) ###Output case = [0, 0, 0, 0] cost = 0.0 case = [1, 0, 0, 0] cost = 3.0 case = [0, 1, 0, 0] cost = 2.0 case = [1, 1, 0, 0] cost = 3.0 case = [0, 0, 1, 0] cost = 3.0 case = [1, 0, 1, 0] cost = 4.0 case = [0, 1, 1, 0] cost = 3.0 case = [1, 1, 1, 0] cost = 2.0 case = [0, 0, 0, 1] cost = 2.0 case = [1, 0, 0, 1] cost = 3.0 case = [0, 1, 0, 1] cost = 4.0 case = [1, 1, 0, 1] cost = 3.0 case = [0, 0, 1, 1] cost = 3.0 case = [1, 0, 1, 1] cost = 2.0 case = [0, 1, 1, 1] cost = 3.0 case = [1, 1, 1, 1] cost = 0.0 Best solution = [1, 0, 1, 0] cost = 4.0 ###Markdown Mapping to the Ising problem ###Code # Determining the constant shift and initialize a pauli_list that contains the ZZ Ising terms pauli_list = [] cost_shift = 0 for i in range(n): for j in range(i): if w[i,j] != 0: cost_shift = cost_shift + w[i,j] wp = np.zeros(n) vp = np.zeros(n) vp[n-i-1] = 1 vp[n-j-1] = 1 pauli_list.append((w[i,j],Pauli(vp,wp))) cost_shift ###Output _____no_output_____ ###Markdown Checking that the full Hamiltonian gives the right cost ###Code #Making the Hamiltonian in its full form and getting the lowest eigenvalue and eigenvector H = make_Hamiltonian(pauli_list) we, ve = la.eigh(H, eigvals=(0, 1)) exact = we[0] exact_maxcut = -we[0]/2+cost_shift/2 print(exact_maxcut) print(exact) H = np.diag(H) ###Output 4.0 -3.0 ###Markdown Running it on quantum computerWe run the optimization routine using a feedback loop with a quantum computer that uses trial functions built with Y single-qubit rotations, $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, and entangler steps $U_\mathrm{entangler}$. 
###Code #Setting up a quantum program and connecting to the Quantum Experience API Q_program = QuantumProgram() # set the APIToken and API url Q_program.set_api(Qconfig.APItoken, Qconfig.config['url']) # Testing Optimization on a quantum computer # Quantum circuit parameters: # the entangler step is made of two-qubit gates between a control and target qubit, control: [target] entangler_map = {0: [1], 1: [2], 2: [3]} # the coupling_maps gates allowed on the device coupling_map = None # the layout of the qubits initial_layout = None # the backend used for the quantum computation backend = 'local_qiskit_simulator' # Total number of trial steps used in the optimization max_trials = 100; n = 4 # the number of qubits # Depth of the quantum circuit that prepares the trial state m = 3 # initial starting point for the control angles initial_theta=np.random.randn(m*n) # number of shots for each evaluation of the cost function (shots=1 corresponds to perfect evaluation, # only available on the simulator) shots = 1 # choose to plot the results of the optimizations every save_steps save_step = 1 """ ########################## RUN OPTIMIZATION ####################### if shots == 1: obj_funct_partial = partial(obj_funct, Q_program, pauli_list, entangler_map, coupling_map, initial_layout, n, m, backend, shots) initial_c=0.01 else: obj_funct_partial = partial(obj_funct, Q_program, pauli_list, entangler_map, coupling_map, initial_layout, n, m, backend, shots) initial_c=0.1 target_update=2*np.pi*0.1 SPSA_parameters=SPSA_calibration(obj_funct_partial,initial_theta,initial_c,target_update,25) print ('SPSA parameters = ' + str(SPSA_parameters)) best_distance_quantum, best_theta, cost_plus, cost_minus,_,_ = SPSA_optimization(obj_funct_partial, initial_theta, SPSA_parameters, max_trials, save_step) """ def cost_function(Q_program,H,n,m,entangler_map,shots,device,theta): return eval_hamiltonian(Q_program,H,trial_circuit_ry(n,m,theta,entangler_map,None,False),shots,device).real initial_c=0.1 target_update=2*np.pi*0.1 save_step = 1 if shots !=1: H=group_paulis(pauli_list) SPSA_params = SPSA_calibration(partial(cost_function,Q_program,H,n,m,entangler_map, shots,backend),initial_theta,initial_c,target_update,25) best_distance_quantum, best_theta, cost_plus, cost_minus, _, _ = SPSA_optimization(partial(cost_function,Q_program,H,n,m,entangler_map,shots,backend), initial_theta,SPSA_params,max_trials,save_step,1); plt.plot(np.arange(0, max_trials,save_step),cost_plus,label='C(theta_plus)') plt.plot(np.arange(0, max_trials,save_step),cost_minus,label='C(theta_minus)') plt.plot(np.arange(0, max_trials,save_step),np.ones(max_trials//save_step)*best_distance_quantum, label='Final Optimized Cost') plt.plot(np.arange(0, max_trials,save_step),np.ones(max_trials//save_step)*exact, label='Exact Cost') plt.legend() plt.xlabel('Number of trials') plt.ylabel('Cost') shots = 5000 circuits = ['final_circuit'] Q_program.add_circuit('final_circuit', trial_circuit_ry(n, m, best_theta, entangler_map, None, True)) result = Q_program.execute(circuits, backend=backend, shots=shots, coupling_map=coupling_map, initial_layout=initial_layout) data = result.get_counts('final_circuit') plot_histogram(data,5) # Getting the solution and cost from the largest component of the optimal quantum state max_value = max(data.values()) # maximum value max_keys = [k for k, v in data.items() if v == max_value] # getting all keys containing the `maximum` x_quantum=np.zeros(n) for bit in range(n): if max_keys[0][bit]=='1': x_quantum[bit]=1 best_cost_quantum = 0 
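# Re-evaluate the classical MaxCut objective C(x) for the most frequently sampled bitstring x_quantum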
for i in range(n): for j in range(n): best_cost_quantum+= w[i,j]*x_quantum[i]*(1-x_quantum[j]) # Plot the quantum solution colors = [] for i in range(n): if x_quantum[i] == 0: colors.append('r') else: colors.append('b') nx.draw_networkx(G, node_color=colors, node_size=600, alpha = .8) print('Best solution from the quantum optimization is = ' +str(x_quantum)+ ' with cost = ' + str(best_cost_quantum)) ###Output Best solution from the quantum optimization is = [1. 0. 1. 0.] with cost = 4.0 ###Markdown Traveling Salesman for 4 cities (9 qubits)For the second problem we consider the traveling salesman problem on N=4 cities. In this case there are (N-1)! two different combinations. ###Code # Random choice of the cities/nodes N = 4 xc = (np.random.rand(N)-0.5)*10 yc = (np.random.rand(N)-0.5)*10 plt.scatter(xc, yc, s=200) for i in range(len(xc)): plt.annotate(i,(xc[i]+0.15,yc[i]),size=16,color='r') plt.show() # Getting the distances w = np.zeros([N,N]) for i in range(N): for j in range(N): w[i,j]= np.sqrt((xc[i]-xc[j])**2+(yc[i]-yc[j])**2) ###Output _____no_output_____ ###Markdown Brute force approachThe brute force approach consists of trying all the paths given by all the permutations of cities/nodes. The number of permutations of N cities/nodes is (N-1)!, which gives for N = 4 paths = 6N = 8 paths = 5040N = 16 paths = 1.3076744e+12 ###Code a=list(permutations(range(1,N))) last_best_distance = 10000000 for i in a: distance = 0 pre_j = 0 for j in i: distance = distance + w[j,pre_j] pre_j = j distance = distance + w[0,pre_j] order = (0,) + i if distance < last_best_distance: best_order = order last_best_distance = distance print('order = ' + str(order) + ' Distance = ' + str(distance)) best_distance_brute = last_best_distance best_order_brute = best_order plt.scatter(xc, yc) xbest = np.array([xc[i] for i in best_order_brute]) xbest = np.append(xbest,xbest[0]) ybest = np.array([yc[i] for i in best_order_brute]) ybest = np.append(ybest,ybest[0]) plt.plot(xbest, ybest, 'b.-', ms = 40) plt.plot(xc[0], yc[0], 'r*', ms = 20) for i in range(len(xc)): plt.annotate(i,(xc[i]+0.2,yc[i]),size=16,color='r') plt.show() print('Best order from brute force = ' + str(best_order_brute) + ' with total distance = ' + str(best_distance_brute)) ###Output order = (0, 1, 2, 3) Distance = 22.55520524719503 order = (0, 1, 3, 2) Distance = 23.03267031162882 order = (0, 2, 1, 3) Distance = 22.386438040024558 order = (0, 2, 3, 1) Distance = 23.032670311628824 order = (0, 3, 1, 2) Distance = 22.386438040024554 order = (0, 3, 2, 1) Distance = 22.555205247195033 ###Markdown Mapping to binary variables and simulated annealing Recall from the introduction that the cost function of the TSP mapped to binary variables is of the form:$$C(\textbf{x})=\sum_{i,j=1}^{N-1}w_{ij}\sum_{p=1}^{N-1} x_{i,p}x_{j,p+1}+\sum_{j=1}^{N-1}w_{0j} x_{j,1}+\sum_{i=1}^{N-1}w_{i0} x_{i,N-1}+ A\sum_{p=1}^{N-1}\left(1- \sum_{i=1}^{N-1} x_{i,p}\right)^2+A\sum_{i=1}^{N-1}\left(1- \sum_{p=1}^{N-1} x_{i,p}\right)^2.$$ ###Code n=(N-1)**2 # number of qubits A = np.max(w)*100 # A parameter of cost function # takes the part of w matrix excluding the 0-th point, which is the starting one wsave = w[1:N,1:N] # nearest-neighbor interaction matrix for the prospective cycle (p,p+1 interaction) shift = np.zeros([N-1,N-1]) shift = la.toeplitz([0,1,0], [0,1,0])/2 # the first and last point of the TSP problem are fixed by initial and final conditions firststep = np.zeros([N-1]) firststep[0] = 1; laststep = np.zeros([N-1]) laststep[N-2] = 1; # The binary variables that define a 
path live in a tensor product space of position and ordering indices # Q defines the interactions between variables Q = np.kron(shift,wsave) + np.kron(A*np.ones((N-1, N-1)), np.identity(N-1)) + np.kron(np.identity(N-1),A*np.ones((N-1, N-1))) # G defines the contribution from the individual variables G = np.kron(firststep,w[0,1:N]) + np.kron(laststep,w[1:N,0]) - 4*A*np.kron(np.ones(N-1),np.ones(N-1)) # M is the constant offset M = 2*A*(N-1) # Evaluates the cost distance from a binary representation of a path fun = lambda x: np.dot(np.around(x),np.dot(Q,np.around(x)))+np.dot(G,np.around(x))+M def get_order_tsp(x): # This function takes in a TSP state, an array of (N-1)^2 binary variables, and returns the # corresponding travelling path associated to it order = [0] for p in range(N-1): for j in range(N-1): if x[(N-1)*p+j]==1: order.append(j+1) return order def get_x_tsp(order): # This function takes in a traveling path and returns a TSP state, in the form of an array of (N-1)^2 # binary variables x = np.zeros((len(order)-1)**2) for j in range(1,len(order)): p=order[j] x[(N-1)*(j-1)+(p-1)]=1 return x # Checking if the best results from the brute force approach are correct for the mapped system of binary variables # Conversion from a path to a binary variable array xopt_brute =get_x_tsp(best_order_brute) print('Best path from brute force mapped to binary variables: \n') print(xopt_brute) flag=False for i in range(100000): rd = np.random.randint(2, size=n) if fun(rd) < (best_distance_brute-0.0001): print('\n A random solution is better than the brute-force one. The path measures') print(fun(rd)) flag=True if flag==False: print('\nCheck with 10^5 random solutions: the brute-force solution mapped to binary variables is correct.\n') print('Shortest path evaluated with binary variables: ') print(fun(xopt_brute)) # Optimization with simulated annealing initial_x = np.random.randint(2, size=n) cost = fun(initial_x) x = np.copy(initial_x) alpha = 0.999 temp = 10 for j in range(10000): # pick a random index and flip the bit associated with it flip=np.random.randint(len(x)) new_x = np.copy(x) new_x[flip]=(x[flip]+1)%2 # compute cost function with flipped bit new_cost=fun(new_x) if np.exp(-(new_cost-cost)/temp) > np.random.rand(): x = np.copy(new_x) cost = new_cost temp= temp*alpha print('distance = ' + str(cost) + ' x_solution = ' + str(x) + ', final temperature= ' + str(temp)) best_order_sim_ann=get_order_tsp(x) plt.scatter(xc, yc) xbest = np.array([xc[i] for i in best_order_sim_ann]) xbest=np.append(xbest,xbest[0]) ybest = np.array([yc[i] for i in best_order_sim_ann]) ybest=np.append(ybest,ybest[0]) plt.plot(xbest, ybest, 'b.-', ms = 40) plt.plot(xc[0], yc[0], 'r*', ms = 20) for i in range(len(xc)): plt.annotate(i,(xc[i]+0.15,yc[i]),size=16,color='r') plt.show() print('Best order from simulated annealing = ' + str(best_order_sim_ann) + ' with total distance = ' + str(cost)) ###Output distance = 22.555205247195772 x_solution = [0 0 1 0 1 0 1 0 0], final temperature= 0.00045173345977048254 ###Markdown Mapping to Z variables and simulation on a quantum computer ###Code # Defining the new matrices in the Z-basis Iv=np.ones((N-1)**2) Qz = (Q/4) Gz =( -G/2-np.dot(Iv,Q/4)-np.dot(Q/4,Iv)) Mz = (M+np.dot(G/2,Iv)+np.dot(Iv,np.dot(Q/4,Iv))) Mz = Mz + np.trace(Qz) Qz = Qz - np.diag(np.diag(Qz)) # Recall the change of variables is # x = (1-z)/2 # z = -2x+1 z= -(2*xopt_brute)+Iv for i in range(1000): rd = 1-2*np.random.randint(2, size=n) if np.dot(rd,np.dot(Qz,rd))+np.dot(Gz,rd)+Mz < (best_distance_brute-0.0001): 
print(np.dot(rd,np.dot(Qz,rd))+np.dot(Gz,rd)+Mz) # Getting the Hamiltonian in the form of a list of Pauli terms pauli_list = [] for i in range(n): if Gz[i] != 0: wp = np.zeros(n) vp = np.zeros(n) vp[i] = 1 pauli_list.append((Gz[i],Pauli(vp,wp))) for i in range(n): for j in range(i): if Qz[i,j] != 0: wp = np.zeros(n) vp = np.zeros(n) vp[i] = 1 vp[j] = 1 pauli_list.append((2*Qz[i,j],Pauli(vp,wp))) pauli_list.append((Mz,Pauli(np.zeros(n),np.zeros(n)))) # Making the Hamiltonian as a full matrix and finding its lowest eigenvalue H = make_Hamiltonian(pauli_list) we, v = la.eigh(H, eigvals=(0,0)) exact = we[0] print(exact) H=np.diag(H) #Setting up a quantum program and connecting to the Quantum Experience API Q_program = QuantumProgram() # set the APIToken and API url Q_program.set_api(Qconfig.APItoken, Qconfig.config['url']) # Optimization of the TSP using a quantum computer # Quantum circuit parameters # the entangler step is made of two-qubit gates between a control and target qubit, control: [target] coupling_map = None # the coupling_maps gates allowed on the device entangler_map = {0: [1], 1: [2], 2: [3], 3: [4], 4: [5], 5: [6], 6: [7], 7: [8]} # the layout of the qubits initial_layout = None # the backend used for the quantum computation backend = 'local_qiskit_simulator' # Total number of trial steps used in the optimization max_trials = 1500; n = 9 # the number of qubits # Depth of the quantum circuit that prepares the trial state m = 5 # initial starting point for the control angles initial_theta=np.random.randn(m*n) # number of shots for each evaluation of the cost function (shots=1 corresponds to perfect evaluation, # only available on the simulator) shots = 1 # choose to plot the results of the optimizations every save_steps save_step = 1 """ ########################## RUN OPTIMIZATION ####################### if shots == 1: obj_funct_partial = partial(obj_funct, Q_program, pauli_list, entangler_map, coupling_map, initial_layout, n, m, backend, shots) initial_c=0.01 else: obj_funct_partial = partial(obj_funct, Q_program, pauli_list, entangler_map, coupling_map, initial_layout, n, m, backenddevice, shots) initial_c=0.1 target_update=2*np.pi*0.1 SPSA_parameters=SPSA_calibration(obj_funct_partial,initial_theta,initial_c,target_update,25) print ('SPSA parameters = ' + str(SPSA_parameters)) best_distance_quantum, best_theta, cost_plus, cost_minus,_,_ = SPSA_optimization(obj_funct_partial, initial_theta, SPSA_parameters, max_trials, save_step) """ def cost_function(Q_program,H,n,m,entangler_map,shots,device,theta): return eval_hamiltonian(Q_program,H,trial_circuit_ry(n,m,theta,entangler_map,None,False),shots,device).real initial_c=0.1 target_update=2*np.pi*0.1 save_step = 1 if shots !=1: H=group_paulis(pauli_list) SPSA_params = SPSA_calibration(partial(cost_function,Q_program,H,n,m,entangler_map, shots,backend),initial_theta,initial_c,target_update,25) best_distance_quantum, best_theta, cost_plus, cost_minus, _, _ = SPSA_optimization(partial(cost_function,Q_program,H,n,m,entangler_map,shots,backend), initial_theta,SPSA_params,max_trials,save_step,1); """ ########################## PLOT RESULTS #######################""" plt.plot(np.arange(0, max_trials,save_step),cost_plus,label='C(theta_plus)') plt.plot(np.arange(0, max_trials,save_step),cost_minus,label='C(theta_minus)') plt.plot(np.arange(0, max_trials,save_step),(np.ones(max_trials//save_step)*best_distance_quantum), label='Final Cost') plt.plot(np.arange(0, max_trials,save_step),np.ones(max_trials//save_step)*exact, label='Exact Cost') 
plt.legend() plt.xlabel('Number of trials') plt.ylabel('Cost') # Sampling from the quantum state generated with the optimal angles from the quantum optimization shots = 100 circuits = ['final_circuit'] Q_program.add_circuit('final_circuit', trial_circuit_ry(n, m, best_theta, entangler_map,None,True)) result = Q_program.execute(circuits, backend=backend, shots=shots, coupling_map=coupling_map, initial_layout=initial_layout) data = result.get_counts('final_circuit') plot_histogram(data,5) # Getting path and total distance from the largest component of the quantum state max_value = max(data.values()) # maximum value max_keys = [k for k, v in data.items() if v == max_value] # getting all keys containing the `maximum` x_quantum=np.zeros(n) for bit in range(n): if max_keys[0][bit]=='1': x_quantum[bit]=1 quantum_order = get_order_tsp(list(map(int, x_quantum))) best_distance_quantum_amp=fun(x_quantum) plt.scatter(xc, yc) xbest = np.array([xc[i] for i in quantum_order]) xbest = np.append(xbest,xbest[0]) ybest = np.array([yc[i] for i in quantum_order]) ybest = np.append(ybest,ybest[0]) plt.plot(xbest, ybest, 'b.-', ms = 40) plt.plot(xc[0], yc[0], 'r*', ms = 20) for i in range(len(xc)): plt.annotate(i,(xc[i]+0.15,yc[i]),size=14,color='r') plt.show() print('Best order from quantum optimization is = ' + str(quantum_order) + ' with total distance = ' + str(best_distance_quantum_amp)) ###Output _____no_output_____
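###Markdown The trial state used throughout this notebook, $|\psi(\boldsymbol\theta)\rangle = [U_\mathrm{single}(\boldsymbol\theta) U_\mathrm{entangler}]^m |+\rangle$, can also be written down explicitly. The cell below is an illustrative sketch using the current `qiskit.QuantumCircuit` API rather than the deprecated `QuantumProgram` interface used above; the exact gate ordering inside the library's `trial_circuit_ry` may differ slightly. ###Code
# Hedged sketch of the layered Ry-plus-entangler trial circuit with the modern Qiskit API.
# It mirrors the structure described in the text; it is not the exact trial_circuit_ry used above.
import numpy as np
from qiskit import QuantumCircuit

def ry_trial_circuit(n_qubits, depth, theta, entangler_pairs):
    qc = QuantumCircuit(n_qubits)
    qc.h(range(n_qubits))                          # prepare |+>^n
    for layer in range(depth):
        for control, target in entangler_pairs:
            qc.cz(control, target)                 # fully entangling C-Phase step
        for q in range(n_qubits):
            qc.ry(theta[layer * n_qubits + q], q)  # single-qubit Y rotations
    qc.measure_all()
    return qc

example_circuit = ry_trial_circuit(4, 3, np.random.randn(12), [(0, 1), (1, 2), (2, 3)])
print(example_circuit.draw())
###Output _____no_output_____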
T5_MultiLabel/T5_MultiLabel.ipynb
###Markdown Multilabel ClassificationIn multi-label classification, a given text sequence should be labeled with the correct subset of a set of pre-defined labels (note that the subset can include both the null set and the full set of labels itself). For this, we will be using the Toxic Comments dataset where each text can be labeled with any subset of the labels - toxic, severe_toxic, obscene, threat, insult, identity_hate. 1. Mounting the drive and navigating to the resource folder.The toxic comments dataset has been stored in the path - `data/multilabel_classification` ###Code cd /content/drive/MyDrive/Colab Notebooks/T5_Multilabel import pandas as pd import json from sklearn.model_selection import train_test_split ###Output _____no_output_____ ###Markdown Before you proceed, please move the dataset to the ideal location using the following steps1. Download the [Toxic Comments dataset](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/).2. Extract the csv files to data/multilabel_classification 2. Preprocessing The DataThe inputs and outputs of a T5 model are always text. A particular task is specified by using a prefix text that lets the model know what it should do with the input. The input data format for a T5 model in Simple Transformers reflects this fact. The input is a Pandas dataframe with the 3 columns - `prefix`, `input_text`, and `target_text`.In the following cell, we convert our data into train and test dataframes with the `prefix` as `multilabel classification`. Further, the test-to-train ratio chosen is 1:9. Once the dataframes are created, we run a sanity check to ensure that all of the data in the dataframes is in text format. ###Code prefix = "data/multilabel_classification/" multi_train_df = pd.read_csv(prefix + 'train.csv') multi_train_df["comment_text"] = multi_train_df["comment_text"].str.replace('\n', ' ').str.replace('\t', ' ') for col in multi_train_df.columns: if col not in ["id", "comment_text"]: multi_train_df[col] = multi_train_df[col].apply(lambda x: col if x else "") multi_train_df["target_text"] = multi_train_df['toxic'].str.cat(multi_train_df[[col for col in multi_train_df.columns if col not in ["id", "comment_text", "toxic"]]], sep=',') multi_train_df["target_text"] = multi_train_df["target_text"].apply(lambda x: ",".join(word for word in x.split(",") if word)).apply(lambda x: x if x else "clean") multi_train_df["input_text"] = multi_train_df["comment_text"].str.replace('\n', ' ') multi_train_df["prefix"] = "multilabel classification" multi_train_df = multi_train_df[["prefix", "input_text", "target_text"]] multi_train_df, multi_eval_df = train_test_split(multi_train_df, test_size=0.1) multi_train_df.head() train_df = pd.concat([multi_train_df]).astype(str) eval_df = pd.concat([multi_eval_df]).astype(str) train_df.to_csv("data/train.tsv", "\t") eval_df.to_csv("data/eval.tsv", "\t") ###Output _____no_output_____ ###Markdown 3. Creating Pretrained Instance of T5 ModelWe will be using the [Simple Transformers library](https://github.com/ThilinaRajapakse/simpletransformers) which is based on the [Hugging Face Transformers](https://github.com/huggingface/transformers) to train the T5 model.The instructions given below will install all the requirements.- Install Anaconda or Miniconda Package Manager from [here](https://www.anaconda.com/products/individual).- Create a new virtual environment and install packages. 
- conda create -n simpletransformers python - conda activate simpletransformers - conda install pytorch cudatoolkit=10.1 -c pytorch- Install simpletransformers. - pip install simpletransformers**NOTE** - The first two steps are necessary only if you choose to run the files on your local system. ###Code !pip install simpletransformers ###Output Collecting simpletransformers [?25l Downloading https://files.pythonhosted.org/packages/35/ef/0b70ae95138064d665d9298c4d96afba2edf4b86dc44f762807ceb12668e/simpletransformers-0.61.4-py3-none-any.whl (213kB)  |████████████████████████████████| 215kB 7.3MB/s [?25hCollecting streamlit [?25l Downloading https://files.pythonhosted.org/packages/d9/99/a8913c21bd07a14f72658a01784414ffecb380ddd0f9a127257314fea697/streamlit-0.80.0-py2.py3-none-any.whl (8.2MB)  |████████████████████████████████| 8.2MB 11.6MB/s [?25hCollecting datasets [?25l Downloading https://files.pythonhosted.org/packages/54/90/43b396481a8298c6010afb93b3c1e71d4ba6f8c10797a7da8eb005e45081/datasets-1.5.0-py3-none-any.whl (192kB)  |████████████████████████████████| 194kB 48.1MB/s [?25hCollecting tensorboardx [?25l Downloading https://files.pythonhosted.org/packages/07/84/46421bd3e0e89a92682b1a38b40efc22dafb6d8e3d947e4ceefd4a5fabc7/tensorboardX-2.2-py2.py3-none-any.whl (120kB)  |████████████████████████████████| 122kB 52.2MB/s [?25hCollecting tokenizers [?25l Downloading https://files.pythonhosted.org/packages/ae/04/5b870f26a858552025a62f1649c20d29d2672c02ff3c3fb4c688ca46467a/tokenizers-0.10.2-cp37-cp37m-manylinux2010_x86_64.whl (3.3MB)  |████████████████████████████████| 3.3MB 26.5MB/s [?25hRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from simpletransformers) (1.19.5) Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from simpletransformers) (2.23.0) Collecting sentencepiece [?25l Downloading https://files.pythonhosted.org/packages/f5/99/e0808cb947ba10f575839c43e8fafc9cc44e4a7a2c8f79c60db48220a577/sentencepiece-0.1.95-cp37-cp37m-manylinux2014_x86_64.whl (1.2MB)  |████████████████████████████████| 1.2MB 54.7MB/s [?25hRequirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from simpletransformers) (1.1.5) Collecting seqeval [?25l Downloading https://files.pythonhosted.org/packages/9d/2d/233c79d5b4e5ab1dbf111242299153f3caddddbb691219f363ad55ce783d/seqeval-1.2.2.tar.gz (43kB)  |████████████████████████████████| 51kB 9.2MB/s [?25hCollecting tqdm>=4.47.0 [?25l Downloading https://files.pythonhosted.org/packages/72/8a/34efae5cf9924328a8f34eeb2fdaae14c011462d9f0e3fcded48e1266d1c/tqdm-4.60.0-py2.py3-none-any.whl (75kB)  |████████████████████████████████| 81kB 8.3MB/s [?25hRequirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from simpletransformers) (1.4.1) Collecting transformers>=4.2.0 [?25l Downloading https://files.pythonhosted.org/packages/d8/b2/57495b5309f09fa501866e225c84532d1fd89536ea62406b2181933fb418/transformers-4.5.1-py3-none-any.whl (2.1MB)  |████████████████████████████████| 2.1MB 55.7MB/s [?25hRequirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from simpletransformers) (0.22.2.post1) Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from simpletransformers) (2019.12.20) Collecting wandb [?25l Downloading https://files.pythonhosted.org/packages/d5/5d/20ab24504de2669c9a76a50c9bdaeb44a440b0e5e4b92be881ed323857b1/wandb-0.10.26-py2.py3-none-any.whl (2.1MB)  |████████████████████████████████| 2.1MB 53.8MB/s 
[?25hRequirement already satisfied: pillow>=6.2.0 in /usr/local/lib/python3.7/dist-packages (from streamlit->simpletransformers) (7.1.2) Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.7/dist-packages (from streamlit->simpletransformers) (7.1.2) Collecting pydeck>=0.1.dev5 [?25l Downloading https://files.pythonhosted.org/packages/d6/bc/f0e44828e4290367c869591d50d3671a4d0ee94926da6cb734b7b200308c/pydeck-0.6.2-py2.py3-none-any.whl (4.2MB)  |████████████████████████████████| 4.2MB 55.9MB/s [?25hRequirement already satisfied: cachetools>=4.0 in /usr/local/lib/python3.7/dist-packages (from streamlit->simpletransformers) (4.2.1) Collecting blinker [?25l Downloading https://files.pythonhosted.org/packages/1b/51/e2a9f3b757eb802f61dc1f2b09c8c99f6eb01cf06416c0671253536517b6/blinker-1.4.tar.gz (111kB)  |████████████████████████████████| 112kB 60.7MB/s [?25hRequirement already satisfied: python-dateutil in /usr/local/lib/python3.7/dist-packages (from streamlit->simpletransformers) (2.8.1) Requirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from streamlit->simpletransformers) (20.9) Requirement already satisfied: pyarrow; python_version < "3.9" in /usr/local/lib/python3.7/dist-packages (from streamlit->simpletransformers) (3.0.0) Collecting validators Downloading https://files.pythonhosted.org/packages/db/2f/7fed3ee94ad665ad2c1de87f858f10a7785251ff75b4fd47987888d07ef1/validators-0.18.2-py3-none-any.whl Requirement already satisfied: tzlocal in /usr/local/lib/python3.7/dist-packages (from streamlit->simpletransformers) (1.5.1) Collecting base58 Downloading https://files.pythonhosted.org/packages/b8/a1/d9f565e9910c09fd325dc638765e8843a19fa696275c16cc08cf3b0a3c25/base58-2.1.0-py3-none-any.whl Collecting gitpython [?25l Downloading https://files.pythonhosted.org/packages/a6/99/98019716955ba243657daedd1de8f3a88ca1f5b75057c38e959db22fb87b/GitPython-3.1.14-py3-none-any.whl (159kB)  |████████████████████████████████| 163kB 57.9MB/s [?25hRequirement already satisfied: tornado>=5.0 in /usr/local/lib/python3.7/dist-packages (from streamlit->simpletransformers) (5.1.1) Requirement already satisfied: altair>=3.2.0 in /usr/local/lib/python3.7/dist-packages (from streamlit->simpletransformers) (4.1.0) Requirement already satisfied: toml in /usr/local/lib/python3.7/dist-packages (from streamlit->simpletransformers) (0.10.2) Collecting watchdog; platform_system != "Darwin" [?25l Downloading https://files.pythonhosted.org/packages/c6/ba/a36ca5b4e75649a002f06531862467b3eb5c768caa23d6d88b921fe238d8/watchdog-2.0.2-py3-none-manylinux2014_x86_64.whl (74kB)  |████████████████████████████████| 81kB 12.3MB/s [?25hRequirement already satisfied: astor in /usr/local/lib/python3.7/dist-packages (from streamlit->simpletransformers) (0.8.1) Requirement already satisfied: protobuf!=3.11,>=3.6.0 in /usr/local/lib/python3.7/dist-packages (from streamlit->simpletransformers) (3.12.4) Requirement already satisfied: dill in /usr/local/lib/python3.7/dist-packages (from datasets->simpletransformers) (0.3.3) Collecting huggingface-hub<0.1.0 Downloading https://files.pythonhosted.org/packages/a1/88/7b1e45720ecf59c6c6737ff332f41c955963090a18e72acbcbeac6b25e86/huggingface_hub-0.0.8-py3-none-any.whl Collecting xxhash [?25l Downloading https://files.pythonhosted.org/packages/7d/4f/0a862cad26aa2ed7a7cd87178cbbfa824fc1383e472d63596a0d018374e7/xxhash-2.0.2-cp37-cp37m-manylinux2010_x86_64.whl (243kB)  |████████████████████████████████| 245kB 53.1MB/s [?25hRequirement already satisfied: 
importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from datasets->simpletransformers) (3.10.0) Requirement already satisfied: multiprocess in /usr/local/lib/python3.7/dist-packages (from datasets->simpletransformers) (0.70.11.1) Collecting fsspec [?25l Downloading https://files.pythonhosted.org/packages/62/11/f7689b996f85e45f718745c899f6747ee5edb4878cadac0a41ab146828fa/fsspec-0.9.0-py3-none-any.whl (107kB)  |████████████████████████████████| 112kB 64.6MB/s [?25hRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->simpletransformers) (2020.12.5) Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->simpletransformers) (2.10) Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->simpletransformers) (1.24.3) Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->simpletransformers) (3.0.4) Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas->simpletransformers) (2018.9) Requirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers>=4.2.0->simpletransformers) (3.0.12) Collecting sacremoses [?25l Downloading https://files.pythonhosted.org/packages/08/cd/342e584ee544d044fb573ae697404ce22ede086c9e87ce5960772084cad0/sacremoses-0.0.44.tar.gz (862kB)  |████████████████████████████████| 870kB 53.1MB/s [?25hRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->simpletransformers) (1.0.1) Collecting subprocess32>=3.5.3 [?25l Downloading https://files.pythonhosted.org/packages/32/c8/564be4d12629b912ea431f1a50eb8b3b9d00f1a0b1ceff17f266be190007/subprocess32-3.5.4.tar.gz (97kB)  |████████████████████████████████| 102kB 13.7MB/s [?25hCollecting docker-pycreds>=0.4.0 Downloading https://files.pythonhosted.org/packages/f5/e8/f6bd1eee09314e7e6dee49cbe2c5e22314ccdb38db16c9fc72d2fa80d054/docker_pycreds-0.4.0-py2.py3-none-any.whl Requirement already satisfied: psutil>=5.0.0 in /usr/local/lib/python3.7/dist-packages (from wandb->simpletransformers) (5.4.8) Collecting sentry-sdk>=0.4.0 [?25l Downloading https://files.pythonhosted.org/packages/f3/92/5a33be64990ba815364a8f2dd9e6f51de60d23dfddafb4f1fc5577d4dc64/sentry_sdk-1.0.0-py2.py3-none-any.whl (131kB)  |████████████████████████████████| 133kB 56.5MB/s [?25hRequirement already satisfied: six>=1.13.0 in /usr/local/lib/python3.7/dist-packages (from wandb->simpletransformers) (1.15.0) Collecting pathtools Downloading https://files.pythonhosted.org/packages/e7/7f/470d6fcdf23f9f3518f6b0b76be9df16dcc8630ad409947f8be2eb0ed13a/pathtools-0.1.2.tar.gz Collecting configparser>=3.8.1 Downloading https://files.pythonhosted.org/packages/fd/01/ff260a18caaf4457eb028c96eeb405c4a230ca06c8ec9c1379f813caa52e/configparser-5.0.2-py3-none-any.whl Collecting shortuuid>=0.5.0 Downloading https://files.pythonhosted.org/packages/25/a6/2ecc1daa6a304e7f1b216f0896b26156b78e7c38e1211e9b798b4716c53d/shortuuid-1.0.1-py3-none-any.whl Requirement already satisfied: PyYAML in /usr/local/lib/python3.7/dist-packages (from wandb->simpletransformers) (3.13) Requirement already satisfied: promise<3,>=2.0 in /usr/local/lib/python3.7/dist-packages (from wandb->simpletransformers) (2.3) Requirement already satisfied: ipywidgets>=7.0.0 in /usr/local/lib/python3.7/dist-packages (from 
pydeck>=0.1.dev5->streamlit->simpletransformers) (7.6.3) Collecting ipykernel>=5.1.2; python_version >= "3.4" [?25l Downloading https://files.pythonhosted.org/packages/3a/7d/9f8ac1b1b76f2f1538b5650f0b5636bae082724b1e06939a3a9d38e1380e/ipykernel-5.5.3-py3-none-any.whl (120kB)  |████████████████████████████████| 122kB 46.2MB/s [?25hRequirement already satisfied: jinja2>=2.10.1 in /usr/local/lib/python3.7/dist-packages (from pydeck>=0.1.dev5->streamlit->simpletransformers) (2.11.3) Requirement already satisfied: traitlets>=4.3.2 in /usr/local/lib/python3.7/dist-packages (from pydeck>=0.1.dev5->streamlit->simpletransformers) (5.0.5) Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging->streamlit->simpletransformers) (2.4.7) Requirement already satisfied: decorator>=3.4.0 in /usr/local/lib/python3.7/dist-packages (from validators->streamlit->simpletransformers) (4.4.2) Collecting gitdb<5,>=4.0.1 [?25l Downloading https://files.pythonhosted.org/packages/ea/e8/f414d1a4f0bbc668ed441f74f44c116d9816833a48bf81d22b697090dba8/gitdb-4.0.7-py3-none-any.whl (63kB)  |████████████████████████████████| 71kB 11.1MB/s [?25hRequirement already satisfied: toolz in /usr/local/lib/python3.7/dist-packages (from altair>=3.2.0->streamlit->simpletransformers) (0.11.1) Requirement already satisfied: jsonschema in /usr/local/lib/python3.7/dist-packages (from altair>=3.2.0->streamlit->simpletransformers) (2.6.0) Requirement already satisfied: entrypoints in /usr/local/lib/python3.7/dist-packages (from altair>=3.2.0->streamlit->simpletransformers) (0.3) Requirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from protobuf!=3.11,>=3.6.0->streamlit->simpletransformers) (54.2.0) Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->datasets->simpletransformers) (3.4.1) Requirement already satisfied: typing-extensions>=3.6.4; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->datasets->simpletransformers) (3.7.4.3) Requirement already satisfied: ipython>=4.0.0; python_version >= "3.3" in /usr/local/lib/python3.7/dist-packages (from ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (5.5.0) Requirement already satisfied: jupyterlab-widgets>=1.0.0; python_version >= "3.6" in /usr/local/lib/python3.7/dist-packages (from ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (1.0.0) Requirement already satisfied: nbformat>=4.2.0 in /usr/local/lib/python3.7/dist-packages (from ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (5.1.3) Requirement already satisfied: widgetsnbextension~=3.5.0 in /usr/local/lib/python3.7/dist-packages (from ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (3.5.1) Requirement already satisfied: jupyter-client in /usr/local/lib/python3.7/dist-packages (from ipykernel>=5.1.2; python_version >= "3.4"->pydeck>=0.1.dev5->streamlit->simpletransformers) (5.3.5) Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.7/dist-packages (from jinja2>=2.10.1->pydeck>=0.1.dev5->streamlit->simpletransformers) (1.1.1) Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.7/dist-packages (from traitlets>=4.3.2->pydeck>=0.1.dev5->streamlit->simpletransformers) (0.2.0) Collecting smmap<5,>=3.0.1 Downloading 
https://files.pythonhosted.org/packages/68/ee/d540eb5e5996eb81c26ceffac6ee49041d473bc5125f2aa995cf51ec1cf1/smmap-4.0.0-py2.py3-none-any.whl Requirement already satisfied: pickleshare in /usr/local/lib/python3.7/dist-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (0.7.5) Requirement already satisfied: pygments in /usr/local/lib/python3.7/dist-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (2.6.1) Requirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.7/dist-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (0.8.1) Requirement already satisfied: pexpect; sys_platform != "win32" in /usr/local/lib/python3.7/dist-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (4.8.0) Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.7/dist-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (1.0.18) Requirement already satisfied: jupyter-core in /usr/local/lib/python3.7/dist-packages (from nbformat>=4.2.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (4.7.1) Requirement already satisfied: notebook>=4.4.1 in /usr/local/lib/python3.7/dist-packages (from widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (5.3.1) Requirement already satisfied: pyzmq>=13 in /usr/local/lib/python3.7/dist-packages (from jupyter-client->ipykernel>=5.1.2; python_version >= "3.4"->pydeck>=0.1.dev5->streamlit->simpletransformers) (22.0.3) Requirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.7/dist-packages (from pexpect; sys_platform != "win32"->ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (0.7.0) Requirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (0.2.5) Requirement already satisfied: terminado>=0.8.1 in /usr/local/lib/python3.7/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (0.9.4) Requirement already satisfied: nbconvert in /usr/local/lib/python3.7/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (5.6.1) Requirement already satisfied: Send2Trash in /usr/local/lib/python3.7/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (1.5.0) Requirement already satisfied: bleach in /usr/local/lib/python3.7/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (3.3.0) Requirement already satisfied: pandocfilters>=1.4.1 in /usr/local/lib/python3.7/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (1.4.3) Requirement already satisfied: testpath in /usr/local/lib/python3.7/dist-packages (from 
nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (0.4.4) Requirement already satisfied: defusedxml in /usr/local/lib/python3.7/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (0.7.1) Requirement already satisfied: mistune<2,>=0.8.1 in /usr/local/lib/python3.7/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (0.8.4) Requirement already satisfied: webencodings in /usr/local/lib/python3.7/dist-packages (from bleach->nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit->simpletransformers) (0.5.1) Building wheels for collected packages: seqeval, blinker, sacremoses, subprocess32, pathtools Building wheel for seqeval (setup.py) ... [?25l[?25hdone Created wheel for seqeval: filename=seqeval-1.2.2-cp37-none-any.whl size=16172 sha256=88418e2fcd82cb64576e690d9a32f0063731bb3aa6d8a4cf49343106331b3ba1 Stored in directory: /root/.cache/pip/wheels/52/df/1b/45d75646c37428f7e626214704a0e35bd3cfc32eda37e59e5f Building wheel for blinker (setup.py) ... [?25l[?25hdone Created wheel for blinker: filename=blinker-1.4-cp37-none-any.whl size=13448 sha256=4b7684b630a3b2be3459d4078abebc4d294e8586013cb9be54da2f5192aeece6 Stored in directory: /root/.cache/pip/wheels/92/a0/00/8690a57883956a301d91cf4ec999cc0b258b01e3f548f86e89 Building wheel for sacremoses (setup.py) ... [?25l[?25hdone Created wheel for sacremoses: filename=sacremoses-0.0.44-cp37-none-any.whl size=886084 sha256=0d0dfddb543242f7cc67997a2bd4dd1c0e7ed99b64f0dc775ab7684907bf67ad Stored in directory: /root/.cache/pip/wheels/3e/fb/c0/13ab4d63d537658f448366744654323077c4d90069b6512f3c Building wheel for subprocess32 (setup.py) ... [?25l[?25hdone Created wheel for subprocess32: filename=subprocess32-3.5.4-cp37-none-any.whl size=6489 sha256=6613468beca27c79a938b2992d9207d499e9b03421f88f0a5a181ec509f136d8 Stored in directory: /root/.cache/pip/wheels/68/39/1a/5e402bdfdf004af1786c8b853fd92f8c4a04f22aad179654d1 Building wheel for pathtools (setup.py) ... [?25l[?25hdone Created wheel for pathtools: filename=pathtools-0.1.2-cp37-none-any.whl size=8786 sha256=31977ee2435be2a52e512e77a31b488de9cb08bdefda72fb4ff4c8485e3465d5 Stored in directory: /root/.cache/pip/wheels/0b/04/79/c3b0c3a0266a3cb4376da31e5bfe8bba0c489246968a68e843 Successfully built seqeval blinker sacremoses subprocess32 pathtools ERROR: google-colab 1.0.0 has requirement ipykernel~=4.10, but you'll have ipykernel 5.5.3 which is incompatible. ERROR: datasets 1.5.0 has requirement tqdm<4.50.0,>=4.27, but you'll have tqdm 4.60.0 which is incompatible. 
Installing collected packages: ipykernel, pydeck, blinker, validators, base58, smmap, gitdb, gitpython, watchdog, streamlit, tqdm, huggingface-hub, xxhash, fsspec, datasets, tensorboardx, tokenizers, sentencepiece, seqeval, sacremoses, transformers, subprocess32, docker-pycreds, sentry-sdk, pathtools, configparser, shortuuid, wandb, simpletransformers Found existing installation: ipykernel 4.10.1 Uninstalling ipykernel-4.10.1: Successfully uninstalled ipykernel-4.10.1 Found existing installation: tqdm 4.41.1 Uninstalling tqdm-4.41.1: Successfully uninstalled tqdm-4.41.1 Successfully installed base58-2.1.0 blinker-1.4 configparser-5.0.2 datasets-1.5.0 docker-pycreds-0.4.0 fsspec-0.9.0 gitdb-4.0.7 gitpython-3.1.14 huggingface-hub-0.0.8 ipykernel-5.5.3 pathtools-0.1.2 pydeck-0.6.2 sacremoses-0.0.44 sentencepiece-0.1.95 sentry-sdk-1.0.0 seqeval-1.2.2 shortuuid-1.0.1 simpletransformers-0.61.4 smmap-4.0.0 streamlit-0.80.0 subprocess32-3.5.4 tensorboardx-2.2 tokenizers-0.10.2 tqdm-4.60.0 transformers-4.5.1 validators-0.18.2 wandb-0.10.26 watchdog-2.0.2 xxhash-2.0.2 ###Markdown 4. Training The T5 Model (t5-small)Some important model arguments are -- `max_seq_length`: Chosen such that most samples are not truncated. Increasing the sequence length significantly affects the memory consumption of the model, so it’s usually best to keep it as short as possible.- `evaluate_during_training`: We’ll periodically test the model against the test data to see how it’s learning.- `evaluate_during_training_steps`: The aforementioned period at which the model is tested.- `evaluate_during_training_verbose`: Show us the results when a test is done.- `fp16`: FP16 or mixed-precision training reduces the memory consumption of training the models (meaning larger batch sizes can be trained effectively).- `save_eval_checkpoints`: By default, a model checkpoint will be saved when an evaluation is performed during training. - `reprocess_input_data`: Controls whether the features are loaded from cache (saved to disk) or whether tokenization is done again on the input sequences. It only really matters when doing multiple runs.- `overwrite_output_dir`: This will overwrite any previously saved models if they are in the same output directory.- `wandb_project`: Used for visualization of training progress. When run, a session link is created where all the necessary plots are shown in a dashboard. ###Code import pandas as pd from simpletransformers.t5 import T5Model train_df = pd.read_csv("data/train.tsv", sep="\t").astype(str) eval_df = pd.read_csv("data/eval.tsv", sep="\t").astype(str) model_args = { "max_seq_length": 196, "train_batch_size": 16, "eval_batch_size": 64, "num_train_epochs": 1, "evaluate_during_training": True, "evaluate_during_training_steps": 15000, "evaluate_during_training_verbose": True, "use_multiprocessing": False, "fp16": False, "save_steps": -1, "save_eval_checkpoints": True, "save_model_every_epoch": False, "reprocess_input_data": True, "overwrite_output_dir": True, "wandb_project": "T5 - Multi-Label", } model = T5Model("t5", "t5-small", args=model_args) model.train_model(train_df, eval_data=eval_df) ###Output _____no_output_____ ###Markdown 5. Testing The ModelTo test the model, we use the prescribed metrics of a weighted F1-Score, Precision and Accuracy. The results are evaluated using the sklearn.metrics library which provides efficient implementation of F1, Precision and Recall calculation. 
The model finetuned through this experiment can be found in the outputs folder of the repository in the folder titled "best_model". ###Code import json from datetime import datetime from pprint import pprint from statistics import mean import numpy as np import pandas as pd from scipy.stats import pearsonr, spearmanr from simpletransformers.t5 import T5Model from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score from transformers.data.metrics.squad_metrics import compute_exact, compute_f1 def f1(truths, preds): return mean([compute_f1(truth, pred) for truth, pred in zip(truths, preds)]) def exact(truths, preds): return mean([compute_exact(truth, pred) for truth, pred in zip(truths, preds)]) def precision(truths, preds): return mean([compute_precision_score(truth, pred) for truth, pred in zip(truths, preds)]) model_args = { "overwrite_output_dir": True, "max_seq_length": 196, "eval_batch_size": 32, "num_train_epochs": 1, "use_multiprocessing": False, "num_beams": None, "do_sample": True, "max_length": 50, "top_k": 50, "top_p": 0.95, "num_return_sequences": 3, } # Load the trained model model = T5Model("t5", "outputs/best_model", args=model_args) # Load the evaluation data df = pd.read_csv("data/eval.tsv", sep="\t").astype(str) # Prepare the data for testing to_predict = [ prefix + ": " + str(input_text) for prefix, input_text in zip(df["prefix"].tolist(), df["input_text"].tolist()) ] truth = df["target_text"].tolist() tasks = df["prefix"].tolist() # Get the model predictions preds = model.predict(to_predict) # Saving the predictions if needed with open(f"predictions/predictions_{datetime.now()}.txt", "w") as f: for i, text in enumerate(df["input_text"].tolist()): f.write(str(text) + "\n\n") f.write("Truth:\n") f.write(truth[i] + "\n\n") f.write("Prediction:\n") for pred in preds[i]: f.write(str(pred) + "\n") f.write( "________________________________________________________________________________\n" ) # Taking only the first prediction preds = [pred[0] for pred in preds] df["predicted"] = preds # Evaluating the tasks separately output_dict = { "multilabel classification": {"truth": [], "preds": [],} } results_dict = {} for task, truth_value, pred in zip(tasks, truth, preds): output_dict[task]["truth"].append(truth_value) output_dict[task]["preds"].append(pred) print("-----------------------------------") print("Results: ") for task, outputs in output_dict.items(): if task == "multilabel classification": try: task_truth = output_dict[task]["truth"] task_preds = output_dict[task]["preds"] results_dict[task] = { "F1 Score": f1_score(truth,preds,average='weighted'), "Exact matches": exact(task_truth, task_preds), "Precision": precision_score(truth,preds,average='weighted'), "Recall": recall_score(truth,preds,average='weighted'), } print(f"Scores for {task}:") print(f"F1 score: {f1(task_truth, task_preds)}") print(f"Exact matches: {exact(task_truth, task_preds)}") print(f"Precision: {precision_score(truth,preds,average='weighted')}") print(f"Recall: {recall_score(truth,preds,average='weighted')}") print() except: pass #Saving the Output to a File with open(f"results/result.json", "w") as f: json.dump(results_dict, f) ###Output _____no_output_____
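###Markdown
As a final sanity check after evaluation, the finetuned checkpoint can also be queried on a single ad-hoc input. The cell below is a minimal sketch added for illustration: the example sentence is made up, and it assumes the same "multilabel classification: " prefix format used to build `to_predict` above.
###Code
# Minimal inference sketch reusing the `model` object loaded from outputs/best_model above.
# The input string is a made-up example; the prefix mirrors the one in the evaluation data.
sample_text = "multilabel classification: The new update makes the mobile app much faster and easier to use."
sample_preds = model.predict([sample_text])
print(sample_preds)
###Output
_____no_output_____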
python/02-functions-and-getting-help.ipynb
###Markdown **[Python Micro-Course Home Page](https://www.kaggle.com/learn/python)**--- These exercises accompany the tutorial on [functions and getting help](https://www.kaggle.com/colinmorris/functions-and-getting-help).As before, don't forget to run the setup code below before jumping into question 1. ###Code # SETUP. You don't need to worry for now about what this code does or how it works. from learntools.core import binder; binder.bind(globals()) from learntools.python.ex2 import * print('Setup complete.') ###Output Setup complete. ###Markdown Exercises 1.Complete the body of the following function according to its docstring.HINT: Python has a builtin function `round` ###Code def round_to_two_places(num): """Return the given number rounded to two decimal places. >>> round_to_two_places(3.14159) 3.14 """ # Replace this body with your own code. # ("pass" is a keyword that does literally nothing. We used it as a placeholder # because after we begin a code block, Python requires at least one line of code) return round(num, 2) q1.check() # Uncomment the following for a hint # q1.hint() # Or uncomment the following to peek at the solution # q1.solution() ###Output _____no_output_____ ###Markdown 2.The help for `round` says that `ndigits` (the second argument) may be negative.What do you think will happen when it is? Try some examples in the following cell?Can you think of a case where this would be useful? ###Code # Put your test code here print(round(123.456, ndigits=-1)) print(round(123456.789, ndigits=-3)) # q2.check() q2.solution() ###Output _____no_output_____ ###Markdown 3.In a previous programming problem, the candy-sharing friends Alice, Bob and Carol tried to split candies evenly. For the sake of their friendship, any candies left over would be smashed. For example, if they collectively bring home 91 candies, they'll take 30 each and smash 1.Below is a simple function that will calculate the number of candies to smash for *any* number of total candies.Modify it so that it optionally takes a second argument representing the number of friends the candies are being split between. If no second argument is provided, it should assume 3 friends, as before.Update the docstring to reflect this new behaviour. ###Code def to_smash(total_candies, number_of_friends = 3): """Return the number of leftover candies that must be smashed after distributing the given number of candies evenly between number of friends. >>> to_smash(91) 1 >>> to_smash(102, 5) 2 """ return total_candies % number_of_friends q3.check() # q3.hint() # q3.solution() ###Output _____no_output_____ ###Markdown 4.It may not be fun, but reading and understanding error messages will be an important part of your Python career.Each code cell below contains some commented-out buggy code. For each cell...1. Read the code and predict what you think will happen when it's run.2. Then uncomment the code and run it to see what happens. (**Tip**: In the kernel editor, you can highlight several lines and press `ctrl`+`/` to toggle commenting.)3. Fix the code (so that it accomplishes its intended purpose without throwing an exception) ###Code round_to_two_places(9.9999) x = -10 y = 5 # Which of the two variables above has the smallest absolute value? smallest_abs = min(abs(x), abs(y)) smallest_abs def f(x): y = abs(x) return y print(f(-5)) ###Output 5
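###Markdown
One hypothetical answer to the open question in exercise 2 (this is an extra illustrative cell, not one of the original exercises): a negative `ndigits` is handy whenever a figure should be reported at a deliberately coarse precision, such as money or population counts.
###Code
# Rounding to the nearest hundred dollars / nearest thousand people (illustrative values).
budget = 48632
population = 8419302
print(round(budget, ndigits=-2))      # 48600
print(round(population, ndigits=-3))  # 8419000
###Output
_____no_output_____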
HighA/jupyter-notebook/class6.ipynb
###Markdown Binary tree sorting ###Code class CLS_node(): """ A Node in binary tree """ def __init__(self,v,l,r): self.value = v self.left = l self.right = r def __str__(self): if self.left == None and self.right == None: return str(self.value) elif self.left == None: return str(self.value)+str(self.right) elif self.right == None: return str(self.left)+str(self.value) else: return str(self.left)+str(self.value)+str(self.right) class CLS_tree(): """ The binary tree """ def __init__(self,r): self.root = r self.widLst = [0 for _ in range(self.dep(self.root))] def sortInsert(self,node,start): # print(start) if node.value>start.value: if start.right==None: start.right = node else: self.sortInsert(node,start.right) else: if start.left==None: start.left = node else: self.sortInsert(node,start.left) # print(root) return def dep(self,start,dept=1): if start == None: return dept ld = self.dep(start.left,dept+1) rd = self.dep(start.right,dept+1) if ld>rd: return ld else: return rd def wid(self,start,dept=0): # print(dept) if(dept==0): self.widLst = [0 for _ in range(self.dep(self.root))] if start == None: return self.widLst[dept]+=1 ld = self.wid(start.left,dept+1) rd = self.wid(start.right,dept+1) return max(self.widLst) def __repr__(self): return str(root) # Test root = CLS_node('o',None,None) tree = CLS_tree(root) strst = set('tzlhpfybd') for s in strst: node = CLS_node(s,None,None) tree.sortInsert(node,tree.root) # tree.sortInsert(node) print(tree) print(tree.dep(tree.root)) # print(tree.widLst) print(tree.wid(tree.root)) max(tree.widLst) ###Output _____no_output_____
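###Markdown
Since the notebook is about binary tree *sorting*, it is worth making the sorted order explicit. The helper below is an added sketch (not part of the original classes): an in-order traversal of the tree built above returns the stored characters in ascending order, which is also the order in which `__str__` concatenates them.
###Code
# In-order traversal sketch: visiting left subtree, node, right subtree
# yields the values of a binary search tree in ascending order.
def inorder(node, out=None):
    if out is None:
        out = []
    if node is None:
        return out
    inorder(node.left, out)
    out.append(node.value)
    inorder(node.right, out)
    return out

print(inorder(tree.root))
###Output
_____no_output_____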
_notebooks/2022-01-20-biasAndVariance.ipynb
###Markdown Beginner Mistakes in Machine Learning> "The mistakes you're likely to make early in your machine learning career."- toc: false- branch: master- badges: true- permalink: /beginner-mistakes/- comments: false- hide: false- categories: [Beginner] So you've decided to pick up machine learning. That's fantastic! It can be incredibly powerful, and open up a ton of opportunities. Before you get started, we should have a little talk about some of the most common mistakes that beginners make when learning machine learning. The biggest culprit of all: overfitting. What is overfitting?Using mathematical models to estimate different parameters and properties is nothing new. The concept is basically as old as math itself, and when used correctly it can be incredibly powerful (not to mention [sexy](https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century).😉) That has created a huge influx of people that want to learn how to get into the field of machine learning and become data scientists themselves. One of the biggest roadblocks that people tend to run into when they are first learning how to use machine learning models is to make a model as complex as possible, and fit their training data to within an inch of its life. This is called overfitting, and it occurs when a model is fitted so well to a particular subset of data that it doesn't work on any other data at all. There are specific training and testing protocols for avoiding this, such as [K-Fold Cross Validation](https://machinelearningmastery.com/k-fold-cross-validation/), and I will explore how that works in another post. Here we'll focus on how overfitting occurs, and what it has to do with the bias and variance of a model. Bias and varianceBias actually describes any systematic error that can be introduced when fitting a model to a dataset. Common sources of bias include:- **Model Bias.** Bias introduced by choosing a model that is ill fit for the application at hand. You'll never be able to fit data well if the model you've chosen is simply wrong for what you're doing. This can be eliminated through thoughtful evaluation of the model you intend to use, and by evaluating multiple models.- **Measurement Bias.** This bias is introduced as the raw data is collected. It can be because of a faulty sensor, it can be because someone read a thermometer wrong, etc. Measurement bias can be difficult to eliminate entirely, but through careful experimental and equipment setup it can be minimized.- **Sampling Bias.** This is what happens when the data sample that has been used to train the model isn't representative of the actual data typically observed for the system. To avoid this, we can train and validate on multiple data samples collected under various conditions to ensure we have a broad enough training data set.That doesn't even begin to cover all the ways that bias can creep into your model, but it gives you an idea as to the kind of things you should be looking out for.Variance is sort of the yin to bias' yang. Where the bias of a system is caused by inadvertently fitting the model to unreal circumstances, variance is caused by entirely real fluctuations within a dataset. Our model of choice can end up being fit to the noise in the dataset, resulting in a model that can't really predict anything.Bias and variance can both result in a model being a poor predictor, but it is impossible to eliminate either entirely. In fact, variance can be helpful in reducing bias by introducing random variation to the training data. 
At the same time, bias can be useful in reducing variance because it can enable the model to ignore the noise in the dataset. The relationship between bias and variance is a balancing act, and its important to getting any use out of a machine learning model. How does model complexity tie in?The complexity of a model is directly tied to the model bias discussed above, and we can illustrate that here. I'll be using the monthly sunspot dataset from [Jason Brownlee's Github](https://github.com/jbrownlee/Datasets). Below I import the data, then render a table and plot to show what the data looks like. Note that the dates have been converted to [Unix epoch time](https://en.wikipedia.org/wiki/Unix_time) for the sake of simplicity. ###Code '''First we import all the packages we'll be using''' import pandas as pd import numpy as np import datetime as dt from scipy.optimize import curve_fit import plotly.express as px # for visualization import plotly.graph_objs as go from plotly.figure_factory import create_table '''Here we import our data, and take a chunk of it for use in our analysis''' rawDataDF = pd.read_csv("monthly-sunspots.txt") rawDataDF["Epoch"] = (pd.to_datetime(rawDataDF['Month']) - dt.datetime(1970,1,1)).dt.total_seconds() df = rawDataDF.iloc[:151,:] table = create_table(rawDataDF.head()) table.show() fig = px.scatter(df, x='Epoch', y='Sunspots') fig.show() ###Output _____no_output_____ ###Markdown We can use scipy to generate a really simple linear model for the data. This is a pretty poor fit for the data sample, but we don't expect that much from a linear model. It doesn't have enough complexity to capture the actual shape of the data. ###Code '''This function is for use with scipy's curve_fit, seen below''' def func(x,b,m): return m*x + b '''We perform the fit, and store the result in our dataframe alongside the raw data.''' popt, pcov = curve_fit(func, df['Epoch'],df['Sunspots']) df['simpleFit'] = df['Epoch'].apply(lambda x: func(x,popt[0],popt[1])) fig.add_scatter(x=df['Epoch'], y=df['simpleFit'], mode='lines') fig.show() ###Output _____no_output_____ ###Markdown By adding another term to this equation, making it a quadratic, we can get a slightly better fit. ###Code '''This code cell is similar to the above one, with a slightly more complex fit.''' def func(x,b,m,a): return a*(x**2) + m*x + b popt, pcov = curve_fit(func, df['Epoch'],df['Sunspots']) df['simpleFit'] = df['Epoch'].apply(lambda x: func(x,popt[0],popt[1],popt[2])) fig.add_scatter(x=df['Epoch'], y=df['simpleFit'], mode='lines') fig.show() ###Output _____no_output_____ ###Markdown In fact, according to [Taylor's Theorem](https://en.wikipedia.org/wiki/Taylor%27s_theorem), it should be possible to get a very good estimation of this data by adding more terms. Below, you can see a plot with a slider that allows you to explore how an increasing number of parameters offer a better fit to the shown data. ###Code #collapse '''This section contains code that dynamically generates functions with a given number of parameters, and fits them using scipy. 
You can take a look if you want, but understanding it isn't necessary for this discussion.''' def funcBuilder(numParams): result = ["x"] count = 0 for i in range(numParams): count = count + 1 result.append(",a"+str(i)) funcStr = list("def func(") funcStr.extend(result) funcStr.extend("):\n") funcStr.extend(" result = 0") count = 0 for i in range(0,numParams): funcStr.extend("+ (x "+ "**" + str(i) + ")" + " * a" + str(i) ) funcStr.extend("\n return result") funcStr = "".join(funcStr) return funcStr poptList = [] popt = [] for numParams in range(1,15,1): exec(funcBuilder(numParams)) popt, pcov = curve_fit(func, df['Epoch'],df['Sunspots'], p0 = np.append(popt,1)) poptList.append(popt) df['fit'+str(numParams)] = df['Epoch'].apply(lambda x: func(x, *popt)) fig = px.scatter(df, x='Epoch', y='Sunspots') fitCols = [x for x in df.columns if "fit" in x] steps = [] for col in fitCols: fig.add_trace( go.Scatter( visible=False, x=df["Epoch"], y=df[col] ) ) fig.data[0].visible = True for i in range(len(fig.data)): numParams = dict( method="update", args=[{"visible": [False] * len(fig.data), "showlegend":False}], # layout attribute label=str(i) ) numParams["args"][0]["visible"][0] = True numParams["args"][0]["visible"][i] = True # Toggle i'th trace to "visible" steps.append(numParams) sliders = [dict( active=0, currentvalue={"prefix": "Number of terms: "}, pad={"t": 50}, steps=steps )] fig.layout.sliders = sliders fig.show() ###Output _____no_output_____ ###Markdown This next piece of code calculates the [Mean Absolute Percent Error (MAPE)](https://en.wikipedia.org/wiki/Mean_absolute_percentage_error) for the fits. A lower value here represents a better fit. This shows that, despite increasing the complexity of the model, four parameters offers the best fit for the data. ###Code '''We get all the columns with "fit" in the title and use them to calculate the MAPE for our fits.''' fitCols = [x for x in df.columns if "fit" in x] dfAPE = pd.DataFrame() dfMAPE = [] for col in fitCols: dfAPE[col+"AbsErr"] = df.apply(lambda x: 0 if x["Sunspots"] == 0.0 else abs(x[col] - x["Sunspots"])/x["Sunspots"],axis=1) dfMAPE.append([int(col.split("t")[-1]),dfAPE[col+"AbsErr"].iloc[-1]/len(dfAPE[col+"AbsErr"])]) dfMAPE1 = pd.DataFrame(dfMAPE, columns=["numParams","MAPE"]) fig = px.scatter(dfMAPE1, x='numParams', y='MAPE') fig.show() ###Output _____no_output_____ ###Markdown Those results are actually kind of misleading though. In the plot above, even the poor fits still have a percent error less than one, but let's see what happens when we explore another subset of the data. ###Code '''Here we grab the next 150 points of data and plot them.''' df = rawDataDF.iloc[150:301,:] fig = px.scatter(df, x='Epoch', y='Sunspots') fig.show() ###Output _____no_output_____ ###Markdown Here we will plot our previous model fits using our new data sample. Explore how adding more parameters affects the fit of this data. 
###Code #collapse '''This is another chunk of code that is sort of complex, and not strictly necessary for understanding the larger point.''' p0 = [] popt = [] for numParams in range(1,15,1): exec(funcBuilder(numParams)) df['fit'+str(numParams)] = df['Epoch'].apply(lambda x: func(x, *poptList[numParams-1])) fig = px.scatter(df, x='Epoch', y='Sunspots') fitCols = [x for x in df.columns if "fit" in x] steps = [] for col in fitCols: fig.add_trace( go.Scatter( visible=False, x=df["Epoch"], y=df[col] ) ) fig.data[0].visible = True for i in range(len(fig.data)): numParams = dict( method="update", args=[{"visible": [False] * len(fig.data), "showlegend":False}], # layout attribute label=str(i) ) numParams["args"][0]["visible"][0] = True numParams["args"][0]["visible"][i] = True # Toggle i'th trace to "visible" steps.append(numParams) sliders = [dict( active=0, currentvalue={"prefix": "Number of terms: "}, pad={"t": 50}, steps=steps )] fig.layout.sliders = sliders fig.show() ###Output _____no_output_____ ###Markdown These fits are terrible! What does the MAPE look like? ###Code '''Calculating the MAPE the same way we did previously.''' fitCols = [x for x in df.columns if "fit" in x] dfAPE = pd.DataFrame() dfMAPE = [] for col in fitCols: dfAPE[col+"AbsErr"] = df.apply(lambda x: 0 if x["Sunspots"] == 0.0 else abs(x[col] - x["Sunspots"])/x["Sunspots"],axis=1) dfMAPE.append([int(col.split("t")[-1]),dfAPE[col+"AbsErr"].iloc[-1]/len(dfAPE[col+"AbsErr"])]) dfMAPE2 = pd.DataFrame(dfMAPE, columns=["numParams","MAPE"]) fig = px.scatter(dfMAPE2, x='numParams', y='MAPE') fig.show() ###Output _____no_output_____ ###Markdown Notice the magnitude of the MAPE in the above plot. This is far worse than the fits on that first data sample. Let's overlay our MAPEs for a direct comparison. ###Code '''Overlaying the MAPE plots for easy comparison. The y-axis had to be a log plot in order for them both to appear on the same plot. You know things have gotten bad when...''' fig = px.line(log_y=True,) fig.add_trace(go.Scatter(x=dfMAPE1["numParams"],y=dfMAPE1["MAPE"], legendgroup="MAPE1",name="MAPE1")) fig.add_trace(go.Scatter(x=dfMAPE2["numParams"],y=dfMAPE2["MAPE"], legendgroup="MAPE2",name="MAPE2")) fig.data[0]["showlegend"] = True fig.update_layout( xaxis_title="numParams", yaxis_title="MAPE" ) fig.show() ###Output _____no_output_____
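###Markdown
The comparison we just did by hand (fit on one chunk of data, score on another) is the core idea behind the K-Fold Cross Validation protocol mentioned at the start of this post. As a tiny preview, and only a sketch (it assumes scikit-learn is installed, which this notebook hasn't used anywhere else), we can let cross validation score polynomial models of increasing degree on folds of the sunspot sample they weren't fitted to:
###Code
'''A small K-Fold Cross Validation sketch (assumes scikit-learn is available). We score polynomial
models of increasing degree on held-out folds of the same sunspot sample used above.'''
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import LinearRegression

X = rawDataDF[['Epoch']].iloc[:301]
y = rawDataDF['Sunspots'].iloc[:301]

for degree in [1, 2, 4, 8, 12]:
    # Scale Epoch first so high powers of it stay numerically well behaved.
    model = make_pipeline(StandardScaler(), PolynomialFeatures(degree), LinearRegression())
    scores = cross_val_score(model, X, y, cv=KFold(n_splits=5),
                             scoring='neg_mean_absolute_error')
    print(degree, round(-scores.mean(), 2))
###Output
_____no_output_____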
Pill11_Intro_Prob/pill11_Introduction_Probabilistic_Models_student.ipynb
###Markdown
(c) October 2016 - This notebook was created by [Oriol Pujol](http://www.maia.ub.es/~oriol).

Probabilistic models

Parametric versus non-parametric

Does the model have a finite set of parameters, or do the parameters grow with the amount of data? Models of the first type are called **parametric models**. On the contrary, models whose number of parameters depends on the size of the data set are called **non-parametric models**. Parametric models are faster to use but make stronger assumptions about the nature of the data distributions. Non-parametric models are more flexible but may become computationally intractable when the amount of data is large.

A simple non-parametric model

A simple example of a non-parametric classifier is the K nearest neighbor. This simple model looks for the K most similar points in the training set, counts the number of elements of each class, and returns the empirical fraction as the estimate,
$$\mathbb{P}(y=c|x,D,K) = \frac{1}{K}\sum_{i\in N_K(x,D)} \mathbb{1}(y_i=c)$$
where $N_K(x,D)$ are the indices of the $K$ nearest points to $x$ in $D$ and $\mathbb{1}(c)$ is the indicator function. This is an example of **memory-based** or **instance-based learning**.

Probabilistic models

Probabilistic models are based on modeling the distribution of the data, $p(x)$. In this sense they are a natural choice for unsupervised data analysis. As we will see later, the key concept in probabilistic models is the factorization of the joint probability density function by assuming, or explicitly exploiting, independences.

In the supervised setting we are interested in modeling the conditional distribution $p(y|x)$, that is, the probability of the labels having observed a certain sample. In this setting the joint probability of labels and data, $p(x,y)$, is not necessary and we will be using Bayes rule, which states the following,
$$p(Y=c|X=x_a) = \frac{p(X=x_a|Y=c)p(Y=c)}{p(X=x_a)} = \frac{p(X=x_a|Y=c)p(Y=c)}{\int p(X=x|Y=y)p(Y=y) dy}$$
A classifier that takes into account the former rule is called a **generative classifier**, since it specifies how to generate data using the **class conditional density or likelihood** $p(X=x_a|Y=c)$ and the **class prior** $p(Y=c)$.

Observe that the likelihood $p(X=x_a|Y=c)$ of $X=x_a$ does not necessarily add to one when considered across all the models, i.e. $\sum\limits_{i=0}^{|Y|} p(X=x_a|Y=c_i) \neq 1$, because $p(X=x_a|Y=c_i)$ accounts for the probability the model for class $c_i$ assigns to example $x_a$. It may happen that all models assign a very small probability value to that sample, and these values need not add to one. Note, however, that given a certain model for class $c_i$, i.e. $p(X=x|Y=c_i)$, then $\sum\limits_{k=1}^{N} p(X=x_k|Y=c_i) = 1$.

An alternative probabilistic approach to generative classifiers is that of **discriminative classifiers**, which directly fit the **class posterior** $p(y = c|x)$.

Generative models for discrete data

Let us start our discussion on probabilistic models. Recall that, by applying Bayes rule, the probability of a sample belonging to one class is
$$p(y=c|x,\theta) \propto p(x|y = c,\theta)p(y = c|\theta)$$
The key to using these models is specifying a suitable form for the class-conditional density $p(x|y = c,\theta)$, which defines what kind of data we expect to see in each class according to some parameters $\theta$ governing the model.
In this context the **likelihood** accounts for the probability of generating that particular data assuming it belongs to a certain model or class. The **prior** is a subjective belief about how probable this model is.

**EXERCISE:** Following Murphy's example, suppose that we are hypothesizing about a number generating function. Up to this point we don't have much information. We can, however, hypothesize about the process generating the values. We could think about many different hypotheses, such as `number generator`, `even number generator`, `odd number generator`, `numbers ended with 9`, `even numbers except 32`, `powers of 2`. With this information we can check the different concepts involved. For example:

What is the likelihood for all these hypotheses?

What is the prior for these hypotheses?

How does our belief change after seeing the example $\{16\}$?

And after seeing the examples $\{16, 64, 8\}$?

After applying the Bayes rule we obtain a probability density function on the hypothesis space. If we have to report a single hypothesis we could proceed in different ways. We can report the **Maximum A Posteriori (MAP)** estimate, which corresponds to the mode of the posterior and can be written as
$$\hat{h}^{MAP} = \arg\max_h p(D|h)p(h) = \arg\max_h [\log(p(D|h)) + \log(p(h))]$$
Usually the likelihood term depends on the number of samples. Then, as we get more data, the MAP converges towards the **Maximum Likelihood Estimate (MLE)**
$$\hat{h}^{MLE} = \arg\max_h p(D|h) = \arg\max_h \log(p(D|h))$$

The Beta-Binomial model

Suppose a random variable $X_i\sim \text{Ber}(\theta)$, where $X_i = 1$ represents "heads", $X_i = 0$ represents "tails", and $\theta\in [0,1]$ is the rate parameter (probability of heads). If the data are iid, the likelihood has the form
$$p(D|\theta) = \theta^{N_1}(1-\theta)^{N_2}$$
where $N_1$ and $N_2$ are the heads and tails counts. What we are working out in this exercise also applies to Binomial distributions, i.e. the probability of observing $N_1$ heads in $N$ tosses,
$$\text{Bin}(k|N,\theta) = \binom{N}{k} \theta^k(1-\theta)^{N-k}.$$
We need a prior that has support over the interval $[0,1]$. We could use the same form as the likelihood. The prior could look like
$$p(\theta) \propto \theta^{\gamma_1}(1-\theta)^{\gamma_2}$$
When the prior and the posterior have the same form, such as in this case, we say the prior is a **conjugate prior** for the corresponding likelihood. In the case of the Bernoulli or Binomial, the conjugate prior is the beta distribution, which has the following form
$$\text{Beta}(\theta|a,b) \propto \theta^{a-1}(1-\theta)^{b-1}$$
The parameters of the prior are called **hyperparameters** and we can set them to encode our prior belief.
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0,1,30)
a=2
b=2
beta = theta**(a-1)*(1-theta)**(b-1)
prior= beta/np.sum(beta)
plt.plot(theta,prior)
plt.gca().set_ylim([0,1])
###Output
_____no_output_____
###Markdown
If we multiply the likelihood by the beta prior we get the corresponding posterior. Suppose that we observe one heads. Then the likelihood is
###Code
likelihood = theta
plt.plot(theta,likelihood)
###Output
_____no_output_____
###Markdown
Observe that the likelihood is not necessarily a probability density function over $\theta$ and does not need to integrate to one when considered over all potential parameters, i.e. $\int p(X=\text{'heads'}|\theta)\, d\theta \neq 1$.
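We can verify this numerically with the grid of $\theta$ values defined above (a quick added check, using the trapezoidal rule):
###Code
# p(heads | theta) = theta, as a function of theta, does not integrate to one:
print(np.trapz(likelihood, theta))        # 0.5, so the likelihood is not a pdf over theta
# but for any fixed theta the probabilities of the two outcomes do sum to one:
print(likelihood[5] + (1 - theta[5]))     # 1.0
###Output
_____no_output_____
###Markdown
Multiplying the likelihood by the prior and normalizing over the grid gives the posterior: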
###Code posterior = likelihood*prior posterior = posterior/np.sum(posterior) plt.plot(theta,posterior) ###Output _____no_output_____ ###Markdown Interestingly enough we can now consider this last result as a new prior and repeat the process with a new observation. Consider a new head. ###Code prior = posterior # a new head likelihood = theta posterior = likelihood * prior posterior = posterior/np.sum(posterior) accum_likelihood = theta*theta accum_likelihood_normalized = accum_likelihood/np.sum(accum_likelihood) plt.plot(theta,posterior, label ="posterior") plt.plot(theta,accum_likelihood_normalized, label="Accum likelihood") ###Output _____no_output_____ ###Markdown This resembles the effect of *online methods*. Observe that the MAP value is different than the mean posterior and the MLE. They correspond to $$\hat{\theta}_{MAP} = \frac{a+N_1-1}{a+b+N-2} $$with a uniform prior, MAP is reduced to MLE,$$\hat{\theta}_{MLE} = \frac{N_1}{N} $$By contrast the posterior mean is $$\bar{\theta} = \frac{a+N_1}{a+b+N}$$ This example can be used to assess the probability and the credibility interval for a classifier comparison. When the outcomes are `classifier A beats classifier B` or the other way around. The Diritchlet-multinomial modelIn the case we have a multinomial random variable, for example, topics in a document text we have a probability simplex. The probability density function will operate over that simplex. LikelihoodSuppose we observe $N$ dice rolls, $D = \{x_1,\dots,x_N\}$ if we assume the data is iid, the likelihood has the form$$p(D|\theta) = \prod_{k=1}^K \theta_k^{N_k}$$where $N_k = \sum_{i=1}^N \mathbb{1}(y_i=k)$ is the number of times event $k$ occurred PriorSince the parameter vector lives in the K-dimensional probability simplex, we need a prior that has support over this simplex. 
The Dirichlet distribution ahs the following prior$$\text{Dir}(\theta|\alpha) = \frac{1}{B(\alpha)}\prod_{k=1}^K \theta_k^{\alpha_k-1}\mathbb{1}(x\in S_k)$$ Posterior ###Code %matplotlib inline import numpy as np import matplotlib.pyplot as plt import matplotlib.tri as tri import functools corners = np.array([[0, 0], [1, 0], [0.5, 0.75**0.5]]) triangle = tri.Triangulation(corners[:, 0], corners[:, 1]) refiner = tri.UniformTriRefiner(triangle) trimesh = refiner.refine_triangulation(subdiv=4) # Mid-points of triangle sides opposite of each corner midpoints = [(corners[(i + 1) % 3] + corners[(i + 2) % 3]) / 2.0 for i in range(3)] def xy2bc(xy, tol=1.e-3): '''Converts 2D Cartesian coordinates to barycentric.''' s = [(corners[i] - midpoints[i]).dot(xy - midpoints[i]) / 0.75 for i in range(3)] return np.clip(s, tol, 1.0 - tol) class Dirichlet(object): def __init__(self, alpha): from math import gamma from operator import mul self._alpha = np.array(alpha) self._coef = gamma(np.sum(self._alpha)) / functools.reduce(mul, [gamma(a) for a in self._alpha]) def pdf(self, x): '''Returns pdf value for `x`.''' from operator import mul return self._coef * functools.reduce(mul, [xx ** (aa - 1) for (xx, aa)in zip(x, self._alpha)]) def draw_pdf_contours(dist, nlevels=200, subdiv=8, **kwargs): import math refiner = tri.UniformTriRefiner(triangle) trimesh = refiner.refine_triangulation(subdiv=subdiv) pvals = [dist.pdf(xy2bc(xy)) for xy in zip(trimesh.x, trimesh.y)] plt.tricontourf(trimesh, pvals, nlevels, **kwargs) plt.axis('equal') plt.xlim(0, 1) plt.ylim(0, 0.75**0.5) plt.axis('off') draw_pdf_contours(Dirichlet([3,2,5])) ###Output _____no_output_____ ###Markdown Naive Bayes 2.1 Basic document representationIn text classification, we are given a description $x \in {\bf R}^d$ of a document $\delta$ and a fixed set of classes $y \in \{c_1, \dots, c_K\}$, for example the document topic. Given a new document, our goal is to predict the most probable class.A very simple description of a document is the **bag-of-words** description. This representation transforms a complete text to a vector of $d$ predefined words. The set of predefined words is selected by the practicioner. For example, the list can consist of the set of all words in a given language. Example 1:Suppose we are given four different documents belonging to the topics $y=\{\text{'economics'},\text{'technology'}\}$ and we select as our representation the following bag-of-words $x = \{\text{'market'}, \text{'stock'}, \text{'price'}, \text{'application'}, \text{'mobile'}, \text{'google'}\}$. We can count the number of times a certain term appears in that document and expect that this description is discriminative enough for identifying the document topic. Check the following example:marketstockpriceapplicationmobilegoogledocument 1($\text{'economics'}$)123000document 2($\text{'economics'}$)012001document 3($\text{'technology'}$)000231document 4($\text{'technology'}$)101230In this representation, document 2 is represented by the vector (0,1,2,0,0,1). We can alternatively use a binary value representing whether a term appears or not in the document. In this last case document would be represesnted by (0,1,1,0,0,1).Observe that this is a context free representation, i.e. the order of the words is not considered. Consider the sentences "Google reduces the prices of applications in App market" and "The number of aplications in Google App market with cheap prices is reduced by 20%". 
The representation for both sentences is the same, though the exact meaning of both sentences is completely different. However, this kind of representation may be enough for identifying that both refers to $\text{'technology'}$. 2.2 The Naive Bayes classifierNaive Bayes is an instance of a Bayessian classifier. In this framework, the problem of classification consists of selecting the class with Maximum A Posteriori (MAP) probability, i.e. $$\hat{y} = \arg\max_y p(y|x).$$In order to find this quantity we use the Bayes equation,$$ p(x,y) = p(x|y)p(y) = p(y|x)p(x),$$and$$ p(y|x) = \frac{p(x|y)p(y)}{p(x)}.$$In order to compute the MAP the quantities $p(x|y)$, $p(y)$, $p(x)$ have to be estimated from observed data.In the problem of document classification, our goal is to select the class with MAP probability. For example, we will select the cathegory $\text{'economics'}$ for a text with description (1,1,1,0,0,0) only if $P(y = \text{'economics'}|x = (1,1,1,0,0,0)) > P(y = \text{'technology'}|x = (1,1,1,0,0,0))$. Note that $p(x)$ is a constant value and it does not affect the decision, thus we just need to compute$$P(y|x) \propto P(y)P(x|y)$$Estimating the likelihood term, $P(x|y)$, accounts for computing the probability of certain description vector in a given class, e.g. the probability of a text in $\text{'economics'}$ having a description $x = (1,1,1,0,0,0)$ (the value of the probability that a description x = (1,1,1,0,0,0) has inside the category $\text{'economics'}$), $p(x = (1,1,1,0,0,0)|y = \text{'economics'})$Up to this point, the description of the classifier is general for any Bayessian classifier. *Naive Bayes additionally assumes that $x$ is composed of a set of $d$ independent variables.* This allows to rewrite the likelihood term as$$p(x_1,x_2,...,x_N | y) = p(x_1|y)p(x_2|y)...p(x_N|y) = \prod\limits_{i=1}^N p(x_i|y)$$For example, in our case $$P(x = (1,1,1,0,0,0)|y = \text{'tech'}) = P(x_1=1|y = \text{'tech'})P(x_2=1|y = \text{'tech'})P(x_3=1|y = \text{'tech'})P(x_4=0|y = \text{'tech'})P(x_5=0|y = \text{'tech'})P(x_6=0|y = \text{'tech'})$$This is understood as the fact that the probability of a document described as x = (1,1,1,0,0,0) is described by the product of the probilities that the first to the third word are present, and the fourth to the sixth word are not.In the end, the Naive Bayes classifier has the following form,$$p(y|x) \propto p(y)\prod\limits_{i=1}^N p(x_i|y)$$In many cases the prior $p(y)$ is unknown or simply we prefer to use a non-informative prior (all documents have the same probability of appearance in our context ($p(y)$)). In that case the formulation is simplified to the Maximum Likelihood Estimate. 2.3 Estimating conditioned probabilities The last remaining step is the estimation of the individual conditional probabilities. There are two classical variants the **Multinomial Naive Bayes** and the **Bernoulli Naive Bayes**. The difference between both lies in the goal of what they are modeling. **In Multinomial NB we compute the probability of generating the observed document.** In this sense, we multiply the conditional probability of each word in the document for all words present in the document. An alternative view is the *Bernoulli model*. 
**In the Bernoulli Naive Bayes we compute the probability of the binary bag-of-words descriptor.** Observe that in the Bernoulli Naive Bayes the final probability depends both on the words that appear in the document and on the words that do not appear, while in the Multinomial NB it only depends on the words that appear. On the other hand, Multinomial Naive Bayes takes into account the multiplicity of the words in the document, while Bernoulli does not.

Let us consider in this example the *Bernoulli model*, which is consistent with our representation, where a zero indicates a word is not present in the document and a one indicates it is present. In order to estimate this probability we can use a frequentist approximation: we estimate the probability as the frequency of appearance of each term in each category, i.e. we divide the number of documents where the word appears by the total number of documents of that category. In our previous example, $p(x_3=1 \,(\text{the word 'price' appears})\,|y =\text{'tech'}) = 1/2$ and $p(x_3=1 \,(\text{the word 'price' appears})\,|y =\text{'eco'}) = 2/2$. This is computed by dividing the number of documents of a given category where the word 'price' appears by the number of documents of that category.

2.3.1 The zero probability effect

In the former example the probability $p(x_5=1|y=\text{'eco'}) = 0$. This implies that if the word 'mobile' appears, the document cannot belong to the class $\text{'economy'}$. It is unreasonable to completely penalize a whole class by the appearance or absence of a single word. It is customary to assign a very low probability value to those cases instead. One well known approach to correct this effect is the so-called **Laplace correction**. It is computed as follows,
$$p(x_i=1 | y=c_k ) = \frac{\#\text{ of documents of class } c_k \text{ where word } x_i \text{ appears} + 1}{\#\text{ of documents of class } c_k + M}$$
where $M$ is the number of words in the description.

2.3.2 Underflow effect

As the number of words in the description increases, there is a higher probability that many of those words will not be present in a given document. The product of many very small values may lead to floating point underflow. For this reason it is usual to work with log probabilities instead. This transformation does not change the decision boundary. In our simplified case,
$$\log p(x|y) = \sum\limits_{i=1}^N \log p(x_i|y)$$
Let us code an even simpler version of Naive Bayes, a Bernoulli generative version. This is just to simplify our code even further, so don't expect it to work well.
###Code import numpy as np class SimpleNaiveBayes(): def __init__(self): self.prob_vec={} self.classes =[] def fit(self,X,y): # extract the probabilities for each element for different classes self.classes = np.unique(y) for c in self.classes: idx=np.where(y==c)[0] N = len(idx) self.prob_vec[c] = np.log(np.sum(X[idx,:],axis=0)/N+1e-10) def predict_proba(self,X): #Output matrix N_documents X N_classes probs = np.zeros((X.shape[0],len(self.classes))) i=0 for k in self.classes: probs[:,i]=np.dot(X,self.prob_vec[k].T).ravel() i+=1 return probs def predict(self,X): return self.classes[np.argmax(self.predict_proba(X),axis =1)] ###Output _____no_output_____ ###Markdown Let us try ###Code X=np.array([[1,1,1,0,0,0],[0,1,1,0,0,1],[0,0,0,1,1,1],[1,0,1,1,1,0]],dtype=np.float) y = [0,0,1,1] clf = SimpleNaiveBayes() clf.fit(X,y) clf.classes clf.prob_vec clf.predict_proba(X) clf.predict(X) #load data import pandas as pd data=pd.read_csv('./files/Boydstun_NYT_FrontPage_Dataset_1996-2006_0.csv') data.head() ###Output _____no_output_____ ###Markdown Let us split the data set in two set: + We will train the classifier with news up to 2004.+ We will test the classifier in news from 2005 and 2006. ###Code import numpy as np #Let us train the classifier with data up to 1/1/2004 and test its performnace in data from 2004-2006 split = pd.to_datetime(pd.Series(data['Date']))<pd.datetime(2004, 1, 1) raw_data = data['Title'] raw_train = raw_data[split] raw_test = raw_data[np.logical_not(split)] y = data['Topic_2digit'] y_train = y[split] y_test = y[np.logical_not(split)] print ('Check the split sizes, train, test and total amount of data:') print (raw_train.shape, raw_test.shape, raw_data.shape) print ('Display the labels:') print (np.unique(y)) # Let us tokenize the data from sklearn.feature_extraction.text import CountVectorizer # We use the count number of instances considering that a word has a minimum support of two documents vectorizer = CountVectorizer(min_df=2, # stop words such as 'and', 'the', 'of' are removed stop_words='english', strip_accents='unicode') #example of the tokenization test_string = raw_train[0] print ("Example: " + test_string +"\n") print ("Preprocessed: " + vectorizer.build_preprocessor()(test_string)+"\n") print ("Tokenized:" + str(vectorizer.build_tokenizer()(test_string))+"\n") print ("Analyzed data string:" + str(vectorizer.build_analyzer()(test_string))+"\n") #Process and convert data X_train = vectorizer.fit_transform(raw_train) X_test = vectorizer.transform(raw_test) print ("Number of tokens: " + str(len(vectorizer.get_feature_names())) +"\n") print ("Extract of tokens:") print (vectorizer.get_feature_names()[1000:1100]) %matplotlib inline X_train = X_train.todense() X_train = X_train.astype(np.float) X_test = X_test.todense() X_test = X_test.astype(np.float) y_train = np.array(y_train.tolist()) clf = SimpleNaiveBayes() clf.fit(X_train,y_train) y_hat = clf.predict(X_test) #from sklearn.naive_bayes import BernoulliNB #nb = BernoulliNB() #nb.fit(X_train,y_train) #y_hat = nb.predict(X_test) from sklearn import metrics import matplotlib.pyplot as plt def plot_confusion_matrix(y_pred, y): plt.imshow(metrics.confusion_matrix(y, y_pred), interpolation='nearest',cmap='gray') plt.colorbar() plt.ylabel('true value') plt.xlabel('predicted value') fig = plt.gcf() fig.set_size_inches(9,9) print ("classification accuracy:", metrics.accuracy_score(y_hat, y_test)) plot_confusion_matrix(y_hat, y_test) print ("Classification Report:") print (metrics.classification_report(y_hat,np.array(y_test))) 
%matplotlib inline #Fit a Bernoulli Naive Bayes from sklearn.naive_bayes import BernoulliNB nb = BernoulliNB() nb.fit(X_train,y_train) y_hat = nb.predict(X_test) from sklearn import metrics import matplotlib.pyplot as plt def plot_confusion_matrix(y_pred, y): plt.imshow(metrics.confusion_matrix(y, y_pred), interpolation='nearest',cmap='gray') plt.colorbar() plt.ylabel('true value') plt.xlabel('predicted value') fig = plt.gcf() fig.set_size_inches(9,9) print ("classification accuracy:", metrics.accuracy_score(y_hat, y_test)) plot_confusion_matrix(y_hat, y_test) print ("Classification Report:") print (metrics.classification_report(y_hat,np.array(y_test))) ###Output classification accuracy: 0.434899328859 Classification Report: precision recall f1-score support 1 0.32 0.64 0.43 56 2 0.01 0.67 0.01 3 3 0.51 0.65 0.57 343 4 0.00 0.00 0.00 0 5 0.01 1.00 0.01 1 6 0.13 0.96 0.23 27 7 0.00 0.00 0.00 0 8 0.00 0.00 0.00 0 10 0.00 0.00 0.00 0 12 0.46 0.43 0.44 466 13 0.00 0.00 0.00 0 14 0.00 0.00 0.00 0 15 0.09 0.54 0.16 57 16 0.54 0.57 0.55 1259 17 0.03 1.00 0.06 4 18 0.00 0.00 0.00 0 19 0.81 0.34 0.48 3544 20 0.75 0.45 0.57 1555 21 0.00 0.00 0.00 0 24 0.00 0.00 0.00 0 26 0.00 0.00 0.00 0 27 0.00 0.00 0.00 0 28 0.00 0.00 0.00 1 29 0.35 0.60 0.44 134 30 0.00 0.00 0.00 0 31 0.00 0.00 0.00 0 99 0.00 0.00 0.00 0 avg / total 0.70 0.43 0.51 7450 ###Markdown **QUESTION:** Identify the three most simple classes. ###Code #What are the top N most predictive features per class? N = 5 voc = vectorizer.get_feature_names() for i, label in enumerate(np.unique(y)): topN = np.argsort(nb.coef_[i])[-N:] print ('Code: '+ str(label) + ' Terms : '+ str([voc[i] for i in topN])) ###Output Code: 1 Terms : ['cut', 'bush', 'economy', 'budget', 'tax'] Code: 2 Terms : ['race', 'gay', 'new', 'court', 'abortion'] Code: 3 Terms : ['care', 'medicare', 'drug', 'health', 'new'] Code: 4 Terms : ['farm', 'safety', 'new', 'farmers', 'food'] Code: 5 Terms : ['workers', 'strike', 'union', 'immigrants', 'new'] Code: 6 Terms : ['students', 'city', 'new', 'school', 'schools'] Code: 7 Terms : ['rules', 'warming', 'air', 'new', 'pollution'] Code: 8 Terms : ['blackout', 'california', 'power', 'energy', 'oil'] Code: 10 Terms : ['new', 'security', '800', 'flight', 'crash'] Code: 12 Terms : ['drug', 'case', 'death', 'new', 'police'] Code: 13 Terms : ['plan', 'security', 'new', 'social', 'welfare'] Code: 14 Terms : ['city', 'homeless', 'york', 'rent', 'new'] Code: 15 Terms : ['new', 'billion', 'deal', 'enron', 'microsoft'] Code: 16 Terms : ['bush', 'challenged', 'war', 'iraq', 'nation'] Code: 17 Terms : ['space', 'nasa', 'loss', 'new', 'shuttle'] Code: 18 Terms : ['business', 'bush', 'clinton', 'china', 'trade'] Code: 19 Terms : ['mideast', 'war', 'israel', 'new', 'china'] Code: 20 Terms : ['2000', 'clinton', 'bush', 'president', 'campaign'] Code: 21 Terms : ['park', 'plan', 'zero', 'ground', 'new'] Code: 24 Terms : ['mayor', 'giuliani', 'city', 'budget', 'new'] Code: 26 Terms : ['blizzard', 'new', 'overview', 'hurricane', 'storm'] Code: 27 Terms : ['blaze', 'ferry', 'killed', 'crash', 'fires'] Code: 28 Terms : ['arts', 'tv', 'broadway', 'art', 'new'] Code: 29 Terms : ['world', 'playoffs', 'series', 'yankees', 'baseball'] Code: 30 Terms : ['87', '79', 'crash', 'dead', 'dies'] Code: 31 Terms : ['faith', 'bishop', 'new', 'church', 'pope'] Code: 99 Terms : ['editors', 'today', 'readers', 'note', 'special'] ###Markdown Let us check what would happen if we enrich the data set with the summary of the article. 
###Code raw_data = data['Title']+data['Summary'] raw_train = raw_data[split] raw_test = raw_data[np.logical_not(split)] y = data['Topic_2digit'] y_train = y[split] y_test = y[np.logical_not(split)] # Let us tokenize the data from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer(min_df=2, stop_words='english', strip_accents='unicode') #example test_string = raw_train[0] print ("Example: " + test_string +"\n") print ("Preprocessed: " + vectorizer.build_preprocessor()(test_string)+"\n") print ("Tokenized:" + str(vectorizer.build_tokenizer()(test_string))+"\n") print ("Analyzed data string:" + str(vectorizer.build_analyzer()(test_string))+"\n") #Fit and convert data X_train = vectorizer.fit_transform(raw_train) X_test = vectorizer.transform(raw_test) print ("\n") print ("Number of tokens: " + str(len(vectorizer.get_feature_names())) +"\n") print ("Extract of tokes:") print (vectorizer.get_feature_names()[1000:1100]) from sklearn.naive_bayes import BernoulliNB nb = BernoulliNB() nb.fit(X_train,y_train) y_hat = nb.predict(X_test) from sklearn import metrics import matplotlib.pyplot as plt def plot_confusion_matrix(y_pred, y): plt.imshow(metrics.confusion_matrix(y, y_pred), interpolation='nearest') plt.colorbar() plt.ylabel('true value') plt.xlabel('predicted value') fig = plt.gcf() fig.set_size_inches(9,9) print ("classification accuracy:", metrics.accuracy_score(y_hat, y_test)) plot_confusion_matrix(y_hat, y_test) print ("Classification Report:") print (metrics.classification_report(y_hat,np.array(y_test))) #What are the top N most predictive features per class? N = 5 voc = vectorizer.get_feature_names() for i, label in enumerate(np.unique(y)): topN = np.argsort(nb.coef_[i])[-N:] print ('Code: '+ str(label) + ' Terms : '+ str([voc[i] for i in topN])) ###Output Code: 1 Terms : ['cut', 'economy', 'market', 'budget', 'tax'] Code: 2 Terms : ['gay', 'race', 'new', 'court', 'abortion'] Code: 3 Terms : ['medicare', 'care', 'drug', 'new', 'health'] Code: 4 Terms : ['disease', 'farm', 'new', 'farmers', 'food'] Code: 5 Terms : ['new', 'workers', 'strike', 'union', 'immigrants'] Code: 6 Terms : ['education', 'students', 'new', 'schools', 'school'] Code: 7 Terms : ['water', 'pollution', 'new', 'global', 'warming'] Code: 8 Terms : ['gas', 'prices', 'energy', 'oil', 'power'] Code: 10 Terms : ['investigation', '800', 'twa', 'flight', 'crash'] Code: 12 Terms : ['death', 'scandal', 'abuse', 'new', 'police'] Code: 13 Terms : ['security', 'clinton', 'social', 'new', 'welfare'] Code: 14 Terms : ['housing', 'york', 'rent', 'nyc', 'new'] Code: 15 Terms : ['new', 'merger', 'scandal', 'antitrust', 'microsoft'] Code: 16 Terms : ['bush', 'challenged', 'war', 'nation', 'iraq'] Code: 17 Terms : ['loss', 'columbia', 'space', 'shuttle', 'new'] Code: 18 Terms : ['deal', 'sanctions', 'clinton', 'china', 'trade'] Code: 19 Terms : ['war', 'peace', 'new', 'china', 'israel'] Code: 20 Terms : ['2000', 'bush', 'president', 'clinton', 'campaign'] Code: 21 Terms : ['memorial', 'zero', 'ground', 'indian', 'new'] Code: 24 Terms : ['city', 'budget', 'governor', 'mayor', 'new'] Code: 26 Terms : ['hurricane', 'york', 'storm', 'new', 'weather'] Code: 27 Terms : ['fires', 'killed', 'accident', 'crash', 'new'] Code: 28 Terms : ['york', 'day', 'museum', 'art', 'new'] Code: 29 Terms : ['world', 'playoffs', 'series', 'yankees', 'baseball'] Code: 30 Terms : ['plane', 'death', 'crash', 'dead', 'dies'] Code: 31 Terms : ['catholic', 'religious', 'new', 'pope', 'church'] Code: 99 Terms : ['park', 'editors', 'new', 
'special', 'note'] ###Markdown And now try with our own Simple Naive Bayes ###Code %matplotlib inline X_train = X_train.todense() X_train = X_train.astype(np.float) X_test = X_test.todense() X_test = X_test.astype(np.float) y_train = np.array(y_train.tolist()) clf = SimpleNaiveBayes() clf.fit(X_train,y_train) y_hat = clf.predict(X_test) #from sklearn.naive_bayes import BernoulliNB #nb = BernoulliNB() #nb.fit(X_train,y_train) #y_hat = nb.predict(X_test) from sklearn import metrics import matplotlib.pyplot as plt def plot_confusion_matrix(y_pred, y): plt.imshow(metrics.confusion_matrix(y, y_pred), interpolation='nearest',cmap='gray') plt.colorbar() plt.ylabel('true value') plt.xlabel('predicted value') fig = plt.gcf() fig.set_size_inches(9,9) print ("classification accuracy:", metrics.accuracy_score(y_hat, y_test)) plot_confusion_matrix(y_hat, y_test) print ("Classification Report:") print (metrics.classification_report(y_hat,np.array(y_test))) ###Output classification accuracy: 0.505637583893 Classification Report: precision recall f1-score support 1 0.50 0.46 0.48 125 2 0.29 0.47 0.36 174 3 0.59 0.60 0.60 428 4 0.20 0.46 0.28 13 5 0.29 0.39 0.33 140 6 0.45 0.57 0.51 158 7 0.32 0.47 0.38 38 8 0.26 0.68 0.38 28 10 0.26 0.46 0.33 70 12 0.50 0.36 0.42 619 13 0.16 0.28 0.21 25 14 0.10 0.38 0.15 37 15 0.30 0.43 0.36 233 16 0.60 0.53 0.57 1519 17 0.30 0.50 0.38 86 18 0.16 0.21 0.18 19 19 0.68 0.46 0.55 2218 20 0.65 0.62 0.64 978 21 0.16 0.37 0.22 30 24 0.36 0.46 0.40 118 26 0.31 0.79 0.44 73 27 0.11 0.38 0.17 8 28 0.30 0.49 0.37 106 29 0.50 0.77 0.61 150 30 0.22 0.82 0.35 17 31 0.12 0.52 0.20 25 99 0.04 0.13 0.06 15 avg / total 0.56 0.51 0.52 7450
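###Markdown
A final remark: `SimpleNaiveBayes` above dodges the zero probability effect of Section 2.3.1 with a tiny additive constant (`1e-10`) rather than with the Laplace correction described there. The cell below is an added sketch of what a Laplace-corrected estimate could look like; following the formula in the text, $M$ is the number of words in the description (here, the vocabulary size).
###Code
# Sketch: same classifier, but the per-class estimate in fit() follows the
# Laplace correction of Section 2.3.1 instead of adding 1e-10.
class LaplaceNaiveBayes(SimpleNaiveBayes):
    def fit(self, X, y):
        self.classes = np.unique(y)
        M = X.shape[1]  # number of words in the description (vocabulary size)
        for c in self.classes:
            idx = np.where(y == c)[0]
            N = len(idx)
            # Binarize the counts so the numerator is the number of documents
            # of class c where word i appears, as in the formula of Section 2.3.1.
            Xb = (X[idx, :] > 0).astype(np.float64)
            self.prob_vec[c] = np.log((np.sum(Xb, axis=0) + 1.0) / (N + M))

clf_laplace = LaplaceNaiveBayes()
clf_laplace.fit(X_train, y_train)
y_hat = clf_laplace.predict(X_test)
print("classification accuracy:", metrics.accuracy_score(y_hat, y_test))
###Output
_____no_output_____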
book1/figures/chapter21_figures.ipynb
###Markdown Figure 21.1: Three clusters with labeled objects inside. ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/clusterPurity.png") ###Output _____no_output_____ ###Markdown Figure 21.2: (a) An example of single link clustering using city block distance. Pairs (1,3) and (4,5) are both distance 1 apart, so get merged first. (b) The resulting dendrogram. Adapted from Figure 7.5 of [Alp04] . Figure(s) generated by [agglomDemo.m](https://github.com/probml/pmtk3/blob/master/demos/agglomDemo.m) ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/agglomDemoData.png") pmlt.show_image("/pyprobml/book1/figures/images/agglomDemoDendrogram.png") ###Output _____no_output_____ ###Markdown Figure 21.3: Illustration of (a) Single linkage. (b) Complete linkage. (c) Average linkage. ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/agglomNearest.png") pmlt.show_image("/pyprobml/book1/figures/images/agglomFurthest.png") pmlt.show_image("/pyprobml/book1/figures/images/agglomAvg.png") ###Output _____no_output_____ ###Markdown Figure 21.4: Hierarchical clustering of yeast gene expression data. (a) Single linkage. (b) Complete linkage. (c) Average linkage. Figure(s) generated by [hclustYeastDemo.m](https://github.com/probml/pmtk3/blob/master/demos/hclustYeastDemo.m) ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/clusterYeastSingleLink.png") pmlt.show_image("/pyprobml/book1/figures/images/clusterYeastCompleteLink.png") pmlt.show_image("/pyprobml/book1/figures/images/clusterYeastAvgLink.png") ###Output _____no_output_____ ###Markdown Figure 21.5: (a) Some yeast gene expression data plotted as a heat map. (b) Same data plotted as a time series. 
Figure(s) generated by [kmeansYeastDemo.m](https://github.com/probml/pmtk3/blob/master/demos/kmeansYeastDemo.m) ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/yeastHeatMap.png") pmlt.show_image("/pyprobml/book1/figures/images/yeastTimeSeries.png") ###Output _____no_output_____ ###Markdown Figure 21.6: Hierarchical clustering applied to the yeast gene expression data. (a) The rows are permuted according to a hierarchical clustering scheme (average link agglomerative clustering), in order to bring similar rows close together. (b) 16 clusters induced by cutting the average linkage tree at a certain height. Figure(s) generated by [hclustYeastDemo.m](https://github.com/probml/pmtk3/blob/master/demos/hclustYeastDemo.m) ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/yeastClustergram.png") pmlt.show_image("/pyprobml/book1/figures/images/yeastClustergram16.png") ###Output _____no_output_____ ###Markdown Figure 21.7: Illustration of K-means clustering in 2d. We show the result of using two different random seeds. Adapted from Figure 9.5 of [Aur19] . Figure(s) generated by [kmeans_voronoi.py](https://github.com/probml/pyprobml/blob/master/scripts/kmeans_voronoi.py) ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_and_run("/pyprobml/scripts/kmeans_voronoi.py") ###Output _____no_output_____ ###Markdown Figure 21.8: Clustering the yeast data from \cref fig:yeast using K-means clustering with $K=16$. (a) Visualizing all the time series assigned to each cluster. (d) Visualizing the 16 cluster centers as prototypical time series. Figure(s) generated by [kmeansYeastDemo.m](https://github.com/probml/pmtk3/blob/master/demos/kmeansYeastDemo.m) ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/yeastKmeans16.png") pmlt.show_image("/pyprobml/book1/figures/images/clusterYeastKmeansCentroids16.png") ###Output _____no_output_____ ###Markdown Figure 21.9: An image compressed using vector quantization with a codebook of size $K$. (a) $K=2$. (b) $K=4$. 
Figure(s) generated by [vqDemo.m](https://github.com/probml/pmtk3/blob/master/demos/vqDemo.m) ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/vqDemoClown2.png") pmlt.show_image("/pyprobml/book1/figures/images/vqDemoClown4.png") ###Output _____no_output_____ ###Markdown Figure 21.10: Illustration of batch vs mini-batch K-means clustering on the 2d data from \cref fig:kmeansVoronoi . Left: distortion vs $K$. Right: Training time vs $K$. Adapted from Figure 9.6 of [Aur19] . Figure(s) generated by [kmeans_minibatch.py](https://github.com/probml/pyprobml/blob/master/scripts/kmeans_minibatch.py) ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_and_run("/pyprobml/scripts/kmeans_minibatch.py") ###Output _____no_output_____ ###Markdown Figure 21.11: Performance of K-means and GMM vs $K$ on the 2d dataset from \cref fig:kmeansVoronoi . (a) Distortion on validation set vs $K$. Figure(s) generated by [kmeans_silhouette.py](https://github.com/probml/pyprobml/blob/master/scripts/kmeans_silhouette.py) [gmm_2d.py](https://github.com/probml/pyprobml/blob/master/scripts/gmm_2d.py) [kmeans_silhouette.py](https://github.com/probml/pyprobml/blob/master/scripts/kmeans_silhouette.py) ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_and_run("/pyprobml/scripts/kmeans_silhouette.py") pmlt.show_and_run("/pyprobml/scripts/gmm_2d.py") pmlt.show_and_run("/pyprobml/scripts/kmeans_silhouette.py") ###Output _____no_output_____ ###Markdown Figure 21.12: Voronoi diagrams for K-means for different $K$ on the 2d dataset from \cref fig:kmeansVoronoi . Figure(s) generated by [kmeans_silhouette.py](https://github.com/probml/pyprobml/blob/master/scripts/kmeans_silhouette.py) ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_and_run("/pyprobml/scripts/kmeans_silhouette.py") ###Output _____no_output_____ ###Markdown Figure 21.13: Silhouette diagrams for K-means for different $K$ on the 2d dataset from \cref fig:kmeansVoronoi . 
Figure(s) generated by [kmeans_silhouette.py](https://github.com/probml/pyprobml/blob/master/scripts/kmeans_silhouette.py) ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_and_run("/pyprobml/scripts/kmeans_silhouette.py") ###Output _____no_output_____ ###Markdown Figure 21.14: Some data in 2d fit using a GMM with $K=5$ components. Left column: marginal distribution $p(\mathbf x )$. Right column: visualization of each mixture distribution, and the hard assignment of points to their most likely cluster. (a-b) Full covariance. (c-d) Tied full covariance. (e-f) Diagonal covariance, (g-h) Spherical covariance. Color coding is arbitrary. Figure(s) generated by [gmm_2d.py](https://github.com/probml/pyprobml/blob/master/scripts/gmm_2d.py) ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_and_run("/pyprobml/scripts/gmm_2d.py") ###Output _____no_output_____ ###Markdown Figure 21.15: Some 1d data, with a kernel density estimate superimposed. Adapted from Figure 6.2 of [Mar18] . Figure(s) generated by [gmm_identifiability_pymc3.py](https://github.com/probml/pyprobml/blob/master/scripts/gmm_identifiability_pymc3.py) ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_and_run("/pyprobml/scripts/gmm_identifiability_pymc3.py") ###Output _____no_output_____ ###Markdown Figure 21.16: Illustration of the label switching problem when performing posterior inference for the parameters of a GMM. We show a KDE estimate of the posterior marginals derived from 1000 samples from 4 HMC chains. (a) Unconstrained model. Posterior is symmetric. (b) Constrained model, where we add a penalty to ensure $\mu_0 < \mu_1$. Adapted from [Mar18] . Figure(s) generated by [gmm_identifiability_pymc3.py](https://github.com/probml/pyprobml/blob/master/scripts/gmm_identifiability_pymc3.py) ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_and_run("/pyprobml/scripts/gmm_identifiability_pymc3.py") ###Output _____no_output_____ ###Markdown Figure 21.17: Fitting GMMs with different numbers of clusters $K$ to the data in \cref fig:gmmIdentifiabilityData . Black solid line is KDE fit. 
Solid blue line is posterior mean; feint blue lines are posterior samples. Dotted lines show the individual Gaussian mixture components, evaluated by plugging in their posterior mean parameters. Adapted from Figure 6.8 of [Mar18] . Figure(s) generated by [gmm_chooseK_pymc3.py](https://github.com/probml/pyprobml/blob/master/scripts/gmm_chooseK_pymc3.py) ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_and_run("/pyprobml/scripts/gmm_chooseK_pymc3.py") ###Output _____no_output_____ ###Markdown Figure 21.18: WAIC scores for the different GMMs. The empty circle is the posterior mean WAIC score for each model, and the black lines represent the standard error of the mean. The solid circle is the in-sample deviance of each model, i.e., the unpenalized log-likelihood. The dashed vertical line corresponds to the maximum WAIC value. The gray triangle is the difference in WAIC score for that model compared to the best model. Adapted from Figure 6.10 of [Mar18] . Figure(s) generated by [gmm_chooseK_pymc3.py](https://github.com/probml/pyprobml/blob/master/scripts/gmm_chooseK_pymc3.py) ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_and_run("/pyprobml/scripts/gmm_chooseK_pymc3.py") ###Output _____no_output_____ ###Markdown Figure 21.19: We fit a mixture of 20 Bernoullis to the binarized MNIST digit data. We visualize the estimated cluster means $ \boldsymbol \mu _k$. The numbers on top of each image represent the estimated mixing weights $ \pi _k$. No labels were used when training the model. Figure(s) generated by [mixBerMnistEM.m](https://github.com/probml/pmtk3/blob/master/demos/mixBerMnistEM.m) ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/mixBernoulliMnist20.png") ###Output _____no_output_____ ###Markdown Figure 21.20: Clustering data consisting of 2 spirals. (a) K-means. (b) Spectral clustering. 
Figure(s) generated by [spectral_clustering_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/spectral_clustering_demo.py) ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_and_run("/pyprobml/scripts/spectral_clustering_demo.py") ###Output _____no_output_____ ###Markdown Figure 21.21: Illustration of biclustering. We show 5 of the 12 organism clusters, and 6 of the 33 feature clusters. The original data matrix is shown, partitioned according to the discovered clusters. From Figure 3 of [KTU06] . Used with kind permission of Charles Kemp. ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/IRManimals.png") ###Output _____no_output_____ ###Markdown Figure 21.22: (a) Example of biclustering. Each row is assigned to a unique cluster, and each column is assigned to a unique cluster. (b) Example of multi-clustering using a nested partition model. The rows can belong to different clusters depending on which subset of column features we are looking at. ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/biclustering.png") pmlt.show_image("/pyprobml/book1/figures/images/multi-clustering.png") ###Output _____no_output_____ ###Markdown Figure 21.23: MAP estimate produced by the crosscat system when applied to a binary data matrix of animals (rows) by features (columns). See text for details. From Figure 7 of [Sha+06] . Used with kind permission of Vikash Mansingkha. ###Code #@title Setup { display-mode: "form" } %%time # If you run this for the first time it would take ~25/30 seconds !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/probml/colab_powertoys.git &> /dev/null !pip3 install nbimporter -qqq %cd -q /content/colab_powertoys from colab_powertoys.probml_toys import probml_toys as pmlt %cd -q /pyprobml/scripts pmlt.show_image("/pyprobml/book1/figures/images/crossCatFig7.png") ###Output _____no_output_____
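###Markdown The figures in this notebook are rendered from the pyprobml scripts and images. As a quick, separate illustration of the contrast in Figure 21.20 (K-means versus spectral clustering on non-convex clusters), the following scikit-learn sketch (not the pyprobml code itself) shows the same effect on a two-moons toy dataset. ###Code
from sklearn.datasets import make_moons
from sklearn.cluster import KMeans, SpectralClustering

X, _ = make_moons(n_samples=500, noise=0.05, random_state=0)
# K-means imposes convex (Voronoi) clusters and splits each moon
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
# Spectral clustering on a nearest-neighbor graph recovers the curved clusters
sc_labels = SpectralClustering(n_clusters=2, affinity='nearest_neighbors',
                               n_neighbors=10, random_state=0).fit_predict(X)
###Output _____no_output_____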
DAY 401 ~ 500/DAY453_[BaekJoon] 큰 수 A+B (Python).ipynb
###Markdown Saturday, August 14, 2021 BaekJoon - Big Number A+B (Python) Problem: https://www.acmicpc.net/problem/10757 Blog: https://somjang.tistory.com/entry/BaekJoon-10757%EB%B2%88-%ED%81%B0-%EC%88%98-AB-Python Solution ###Code def big_sum(num1, num2): return num1 + num2 if __name__ == "__main__": num1, num2 = map(int, input().split()) print(big_sum(num1, num2)) ###Output _____no_output_____
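###Markdown The one-line solution above works because Python integers have arbitrary precision, so the operands can be far larger than a 64-bit integer and a plain `+` still adds them exactly; a quick check (not part of the original write-up): ###Code
# Python ints are arbitrary precision, so very large operands add exactly
a = 10 ** 10000
b = 10 ** 10000
print(len(str(a + b)))  # 10001 digits, no overflow
###Output _____no_output_____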
examples/Simple-Example.ipynb
###Markdown Network-TMLE: a simple exampleHere, we demonstrate a simple application and simulation study of the targeted maximum likelihood estimator (TMLE) for network-dependent data (network-TMLE). For this example, we use the data generating mechanism of Sofrygin and van der Laan (2017).Each individual has three variables: an exposure ($A$), an outcome ($Y$), and a baseline covariate ($W$). Here, all variables are binary. We have also measured the network, summarized by the adjacency matrix $\mathcal{G}$. For the $n$ units in the network, we define two summary measures:$$ W_i^s = \sum_{j=1}^n W_j \mathcal{G}_{ij} $$$$ A_i^s = \sum_{j=1}^n A_j \mathcal{G}_{ij} $$Now, we can discuss the data generating mechanism for this example$$ \text{logit}(\Pr(A_i = 1 | W_i, W_i^s)) = -1.2 + 1.5 W_i + 0.6 W_i^s $$$$ \text{logit}(\Pr(Y_i = 1 | A_i, A_i^s, W_i, W_i^s)) = -2.5 + 0.5 A_i + 1.5 A_i^s + 1.5 W_i + 1.5 W_i^s $$As shown here, $Y_i$ depends on a summary measure (a simple count) of unit $i$'s immediate contacts' exposure and baseline covariate. Therefore, we have network-dependence and need to use network-TMLE to estimate the mean under a policy.Here, we will consider 3 different policies. All policies are stochastic, in that they assign each unit a probability of exposure between 0 and 1. The policies set $\Pr^*(A_i = 1)$ to $\omega_1 = 0.2$, $\omega_2 = 0.5$, and $\omega_3 = 0.8$. Conditional policies could also be considered, but here we focus on marginal policies.The following section sets up the necessary libraries ###Code import numpy as np import pandas as pd import networkx as nx import matplotlib.pyplot as plt from mossspider import NetworkTMLE from mossspider.dgm import uniform_network, generate_observed, generate_truth ###Output _____no_output_____ ###Markdown Generating example data First, we will use `mossspider` to simulate a uniform random graph for us. This network (and the distribution of $W$) will be held fixed throughout. See details on the estimand for why this is the case.Here, the uniform network will consist of $n=500$ units with units having degrees of $F_i = \{1,2,3,4\}$. ###Code G = uniform_network(n=500, # Number of nodes degree=[1, 4], # Min and Max degree seed=2022) # Seed for consistency # Generating color map based on W color_map = [] # Empty list for colors for node, data in G.nodes(data=True): # For loop over nodes if data['W'] == 1: # If W=1 color_map.append('blue') # ... set color to blue else: # If W=0 color_map.append('green') # ... set color to green # Drawing the network nx.draw(G, node_size=20, # Setting node size node_color=color_map) # Setting colors ###Output _____no_output_____ ###Markdown Next, we need to generate some observed values for units' exposure and outcome in the network. We can accomplish this using another built-in function, which uses the previous data generating mechanism and adds the exposure and outcome data to our network. ###Code H = generate_observed(G, seed=202203) # Extract A and Y from network a_list, y_list = [], [] for node, data in G.nodes(data=True): # For loop over nodes a_list.append(data['A']) y_list.append(data['Y']) # Proportions for A and Y print("A:", np.mean(a_list)) print("Y:", np.mean(y_list)) ###Output A: 0.468 Y: 0.646 ###Markdown Applying Network-TMLENow that we have a sample data set, we can apply network-TMLE to it. In `mossspider`, network-TMLE is implemented in the `NetworkTMLE` class. Before initializing it, it can help to see what the summary measures defined above look like when computed by hand; a small sketch follows below. 
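###Markdown The sketch below only illustrates the summary measures $W^s$ and $A^s$ defined earlier; it is not how `mossspider` computes them internally. The helper name `summary_measures` and the assumption that every node carries `'W'` and `'A'` attributes (as in the generated graph `H`) are ours. ###Code
import networkx as nx
import numpy as np

def summary_measures(graph):
    # Adjacency matrix G, aligned with a fixed node ordering
    nodes = list(graph.nodes())
    G_mat = nx.to_numpy_array(graph, nodelist=nodes)
    W = np.array([graph.nodes[n]['W'] for n in nodes])
    A = np.array([graph.nodes[n]['A'] for n in nodes])
    W_s = G_mat @ W   # W_i^s = sum_j W_j G_ij
    A_s = G_mat @ A   # A_i^s = sum_j A_j G_ij
    return W_s, A_s

W_s, A_s = summary_measures(H)
print(W_s[:5], A_s[:5])
###Output _____no_output_____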
To initialize the class object, we need to provide it a `networkx.Graph` with all the assigned variables (done internally by the functions used to generate data), the name of the exposure variable, and the name of the outcome variable. We can do this via: ###Code ntmle = NetworkTMLE(network=H, # NetworkX graph exposure='A', # Exposure in graph outcome='Y') # Outcome in graph ###Output _____no_output_____ ###Markdown Internally, `NetworkTMLE` extracts the adjacency matrix from the network, extracts the variables aligned with the adjacency matrix, and calculates the available summary measures. We can view the generated data set like so ###Code ntmle.df # Peeking at the generated data set ###Output _____no_output_____ ###Markdown The next step of applying network-TMLE is to specify the nuisance models. Nuisance Models Exposure Nuisance ModelTo start, we will specify the exposure nuisance models. The exposure nuisance models are used to estimate the inverse probability weights (IPW). The IPW take the following form$$ \frac{\pi^*(W_i,W_i^s, \delta^*, \gamma^*)}{\pi(W_i,W_i^s, \delta, \gamma)} = \frac{\Pr(A_i,A_i^s | W_i, W_i^s; \delta^*, \gamma^*)}{\Pr(A_i,A_i^s | W_i, W_i^s; \delta, \gamma)} $$To make this simpler, we factor the probabilities as$$ \frac{\pi^*(W_i,W_i^s, \delta^*, \gamma^*)}{\pi(W_i,W_i^s, \delta, \gamma)} = \frac{\Pr(A_i | W_i, W_i^s; \delta^*) \Pr(A_i^s | A_i, W_i, W_i^s; \gamma^*)}{\Pr(A_i | W_i, W_i^s; \delta) \Pr(A_i^s | A_i, W_i, W_i^s; \gamma)} $$Therefore, there are (at least) two models to estimate. For now, we just need to specify the parametric forms of these models for network-TMLE.To specify the exposure nuisance model, we actually need to specify two separate models here. We can do this via ###Code # Model for Pr(A | W, W^s; \delta) ntmle.exposure_model(model="W + W_sum") # Parametric model # Model for Pr(A^s | A, W, W^s; \gamma) ntmle.exposure_map_model(model='A + W + W_sum', # Parametric model measure='sum', # Summary measure for A^s distribution='poisson') # Model distribution to use ###Output _____no_output_____ ###Markdown For the `.exposure_map_model`, you will notice there are two additional arguments. These tell `NetworkTMLE` two key pieces of information: the summary measure to use for $A^s$ and the distribution to use for the model. Here, we are telling `NetworkTMLE` to use the sum summary measure, `A_sum` ($\sum_{j=1}^n A_j \mathcal{G}_{ij}$), and to use a Poisson GLM (since `A_sum` is a count variable). Outcome Nuisance ModelNext, we need to specify the outcome nuisance model. Unlike the exposure nuisance model, we don't need to factor. Instead, we use a single model for $$E[Y_i | A_i, A_i^s, W_i, W_i^s; \alpha]$$We can specify the parametric form of this model via ###Code # Model for E[Y | A, A^s, W, W^s; \alpha] ntmle.outcome_model(model='A + A_sum + W + W_sum') ###Output _____no_output_____ ###Markdown Targeting and EvaluationNow that our nuisance models are defined, we can estimate the parameter of interest. This is all done via the `.fit()` function. However, the `.fit()` function is doing a lot behind the scenes. First, the IPW are calculated. The numerator of these weights is estimated using a simulation approach. Next, the outcome model is targeted via the IPW. Then the outcome model is used to generate predictions under the policy, which are subsequently targeted. Because stochastic policies have randomness, we handle the randomness with a Monte Carlo integration procedure (a conceptual sketch of this step is given below). 
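###Markdown The following is only a conceptual sketch of the Monte Carlo integration step under the policy $\Pr^*(A_i=1)=p$. The names here (`predict_fn`, `G_mat`) are placeholders, and this is not the internal `mossspider` implementation; in particular, it leaves out the weighting and targeting details. ###Code
import numpy as np

rng = np.random.default_rng(2022)

def mc_mean_under_policy(predict_fn, W, W_s, G_mat, p, samples=500):
    # predict_fn stands in for a fitted (targeted) outcome model mapping
    # (A, A_s, W, W_s) to predicted probabilities of Y=1.
    n = W.shape[0]
    estimates = []
    for _ in range(samples):
        A_star = rng.binomial(1, p, size=n)   # exposures drawn under the policy
        A_s_star = G_mat @ A_star             # recompute the summary measure
        estimates.append(np.mean(predict_fn(A_star, A_s_star, W, W_s)))
    return np.mean(estimates)                 # Monte Carlo integral over the policy
###Output _____no_output_____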
Lastly, we calculate the variance and confidence intervals using the influence curve.To begin, we will estimate the mean under $\omega_1$. Here, we will use 500 samples for the IPW estimation and the Monte Carlo integration (in general, the larger the number of samples the better the approximation) ###Code # Estimation ntmle.fit(p=0.2, # Policy samples=500, # replicates for MC integration seed=20220316) # seed for consistency # Displaying results ntmle.summary(decimal=4) ###Output ====================================================================== Network Targeted Maximum Likelihood Estimator ====================================================================== Treatment: A No. Observations: 500 Outcome: Y No. Background Nodes: 0 Q-Model: Logistic No. IPW Truncated: 0 g-Model: Logistic No. Resamples: 500 gs-Model: Poisson g-Distribution: Poisson ====================================================================== Mean under policy: 0.4404 ---------------------------------------------------------------------- Variance Estimates ---------------------------------------------------------------------- Conditional: Direct-Only SE : 0.022 95.0% CL: [0.3972 0.4836] Conditional: Direct & Latent SE : 0.0217 95.0% CL: [0.3979 0.483 ] ====================================================================== ###Markdown We can repeat the same process for the other $\omega$ ###Code # Estimation ntmle.fit(p=0.5, # Policy samples=500, # replicates for MC integration seed=2022) # seed for consistency # Displaying results ntmle.summary(decimal=4) # Estimation ntmle.fit(p=0.8, # Policy samples=500, # replicates for MC integration seed=2022) # seed for consistency # Displaying results ntmle.summary(decimal=4) ###Output ====================================================================== Network Targeted Maximum Likelihood Estimator ====================================================================== Treatment: A No. Observations: 500 Outcome: Y No. Background Nodes: 0 Q-Model: Logistic No. IPW Truncated: 0 g-Model: Logistic No. Resamples: 500 gs-Model: Poisson g-Distribution: Poisson ====================================================================== Mean under policy: 0.8281 ---------------------------------------------------------------------- Variance Estimates ---------------------------------------------------------------------- Conditional: Direct-Only SE : 0.0153 95.0% CL: [0.7981 0.8582] Conditional: Direct & Latent SE : 0.015 95.0% CL: [0.7987 0.8575] ====================================================================== ###Markdown As the assigned probability under $\omega$ increased, the mean became larger here. Therefore, if $Y$ was a beneficial outcome, we would prefer policies that increased $A$. If $Y$ was instead harmful, we would prefer policies where $A$ was mitigated. SimulationAs a quick demonstration of the performance of network-TMLE, we now conduct a simulation study using the same mechanism and evaluating at the same $\omega$. First, we will simulate the truth or reference values. 
This can easily be done using a built-in function ###Code # Setup values to evaluate at omega = [0.2, 0.5, 0.8] truth = {} # Calculate truth or reference values for p in omega: # For every omega true_p = [] # Empty storage for i in range(5000): # Sim 10k times y_mean = generate_truth(graph=G, p=p) true_p.append(y_mean) truth[p] = np.mean(true_p) # update dict to have true print(truth) ###Output {0.2: 0.4929396000000001, 0.5: 0.665762, 0.8: 0.7947992} ###Markdown Now we can generate a bunch of observations, run network-TMLE on each iteration, and the evaluate the bias and confidence interval coverage. Here, 200 simulations are done for the previously generated network ###Code # Setup simulation result storage bias, coverage = {}, {} for p in omega: bias[p] = [] coverage[p] = [] # Running the simulation for i in range(200): # Generating A & Y data for the observational mechanism K = generate_observed(G, seed=None) # seed is None, so changes each iteration # Specifying network-TMLE ntmle = NetworkTMLE(network=K, # NetworkX graph exposure='A', # Exposure outcome='Y') # Outcome ntmle.exposure_model(model="W + W_sum") # Parametric model ntmle.exposure_map_model(model='A + W + W_sum', # Parametric model measure='sum', # A^s specification distribution='poisson') # Model distribution ntmle.outcome_model(model='A + A_sum + W + W_sum') # Parametric model # Estimating the mean and confidence intervals for each policy for p in omega: ntmle.fit(p=p, # Policy samples=500) # replicates bias[p].append(ntmle.marginal_outcome - truth[p]) ci = ntmle.conditional_latent_ci if ci[0] < truth[p] < ci[1]: coverage[p].append(1) else: coverage[p].append(0) print("Results") print("=======================") for p in omega: print("-----------------------") print("Omega: ", p) print("Bias: ", np.round(np.mean(bias[p]), 3)) print("Coverage:", np.round(np.mean(coverage[p]), 3)) print("=======================") ###Output Results ======================= ----------------------- Omega: 0.2 Bias: -0.002 Coverage: 0.805 ----------------------- Omega: 0.5 Bias: -0.001 Coverage: 0.95 ----------------------- Omega: 0.8 Bias: -0.0 Coverage: 0.945 =======================
Ray_tutorial/1. Ray_Simple_Turorial.ipynb
###Markdown Ray Installation Reference Link This notebook was written while consulting the references below; the linked posts describe the qualitative characteristics of the Ray package in more detail. https://data-newbie.tistory.com/415, https://towardsdatascience.com/modern-parallel-and-distributed-python-a-quick-tutorial-on-ray-99f8d70369b8 Ray Tutorial ###Code import sys IN_COLAB = "google.colab" in sys.modules if IN_COLAB: !pip install ray ###Output _____no_output_____ 
###Markdown ![image](https://drive.google.com/uc?id=1HBbLD02L-o4_oOLQxEx6MvKKiFcbEBhP) To use Ray, importing the module is not enough: you also need to call ray.init() to initialize it, as follows. import ray ray.init() ###Code import ray import time import numpy as np ray.init() ###Output 2021-03-22 14:43:19,392 INFO services.py:1173 -- View the Ray dashboard at http://127.0.0.1:8265 
###Markdown After executing it, you can see an address of the form "localhost:port" printed. This dashboard address shows how the server's resources are being used, so it is where you can monitor their status. ###Code # For GPU users ''' import torch ray.init(num_gpus=2) # 'num_gpus' in ray.init() means the number of gpus to be used in the whole procedure @ray.remote(num_gpus=2) # 'num_gpus' in ray.remote() means the ratio of gpu memory to be occupied when the corresponding class('GPUActor' in this case) is called. class GPUActor(object): def __init__(self): a = torch.nn.Linear(10, 10).cuda() b = torch.ones(1,10).cuda() print(a(b)) # The two actors created here can execute concurrently. [GPUActor.remote() for _ in range(2)] ''' ###Output _____no_output_____ 
###Markdown From here on, we cover concrete usage. A function you want to run in parallel is declared with the @ray.remote decorator as follows. @ray.remote def f(x): time.sleep(5) return x * x Once declared this way, the function can only be called by appending .remote(), as in function_name.remote(). Here, .remote() hands the task off to a worker and does not wait for the method to finish executing. So as the code runs, a line containing .remote() moves straight on to the next line even if the result of that call is not available yet. That is, results = [] for i in range(10): results.append(f.remote(i)) runs without waiting for the results of f; the calls are simply handed off to the workers, so the 10 iterations of the loop finish almost instantly. ray.get(results) To obtain the results, ray.get() must be used. Run as above, the output is returned once all the workers have finished. ###Code # If there is a function you want to run in parallel, decorate it as below to use it in parallel mode. @ray.remote def f(x): time.sleep(5) return x * x # The number_of_workers variable sets how many copies of the function declared above run concurrently. number_of_workers = 5 tic = time.time() results = [f.remote(i) for i in range(number_of_workers)] print(ray.get(results)) print("Total elapsed time: ", time.time()-tic) number_of_workers = 40 tic = time.time() results = [f.remote(i) for i in range(number_of_workers)] print(ray.get(results)) print("Total elapsed time: ", time.time()-tic) # Without parallelism, the 40 function calls would have taken 40*5=200 seconds, but multiple workers run them in parallel so it finishes much faster. ###Output _____no_output_____ 
###Markdown Ray can also store particular data in shared memory and share that data across workers. ###Code # The ray.put function registers an object to be shared. # Multiple functions can then access the registered object and do their work on it, so memory is used very efficiently. import numpy as np import psutil import scipy.signal @ray.remote def f(image, random_filter): # Do some image processing. return scipy.signal.convolve2d(image, random_filter)[::5, ::5] num_of_workers = 12 filters = [np.random.normal(size=(4, 4)) for _ in range(num_of_workers)] tic = time.time() for _ in range(10): image = np.zeros((3000, 3000)) image_id = ray.put(image) # put the array into shared memory results = [f.remote(image_id, filters[i]) for i in range(num_of_workers)] ray.get(results) print("Elapsed time (s): ", time.time() - tic) ###Output Elapsed time (s): 8.013531684875488 
###Markdown To compare the elapsed time, this time we run the for loop without Ray. ###Code def f(image, random_filter): return scipy.signal.convolve2d(image, random_filter)[::5, ::5] num_of_workers = 4 filters = [np.random.normal(size=(4, 4)) for _ in range(num_of_workers)] tic = time.time() for _ in range(10): image = np.zeros((3000, 3000)) results = [f(image, filters[i]) for i in range(num_of_workers)] print("Elapsed time (s): ", time.time() - tic) ###Output _____no_output_____ 
###Markdown The slowdown is not exactly a factor of num_of_workers, but the difference is clear. In particular, it is worth noting that the image array does not need to be copied: multiple workers can easily share and access it through the ray.put() method. ###Code @ray.remote def create_matrix(size, num): time.sleep(num) return np.random.normal(size=size) @ray.remote def multiply_matrices(x, y): return np.dot(x, y) x_id = create_matrix.remote([1000, 1000], 6) y_id = create_matrix.remote([1000, 1000], 2) z_id = multiply_matrices.remote(x_id, y_id) # Looking at the elapsed time below, x finishes later, so z is computed at the moment x completes. tic = time.time() z = ray.get(z_id) print("Elapsed time (s): ", time.time() - tic) ###Output _____no_output_____ 
###Markdown Next, let's realize the following computation graphs. How different will the computation speeds of the left and right figures be? ![image](https://drive.google.com/uc?id=1HG_mnwXO4mtG-ih2Kr3Kdd_MfNx0Esvq) ###Code # Testing code that follows the computation order shown in the figure above. @ray.remote def add(x, y): time.sleep(1) return x + y # First, let's run the add operations following the left-hand flow. With n numbers to add, this approach requires O(n) computation. tic = time.time() id1 = add.remote(1, 2) id2 = add.remote(id1, 3) id3 = add.remote(id2, 4) id4 = add.remote(id3, 5) id5 = add.remote(id4, 6) id6 = add.remote(id5, 7) id7 = add.remote(id6, 8) result = ray.get(id7) print("Result: ", result) print("Elapsed time (s): ", time.time() - tic) # Now let's run the add operations following the right-hand flow. With n numbers to add, this approach takes O(log(n)) computation, # so it becomes much more useful than the method above as n grows. tic = time.time() id1 = add.remote(1, 2) id2 = add.remote(3, 4) id3 = add.remote(5, 6) id4 = add.remote(7, 8) id5 = add.remote(id1, id2) id6 = add.remote(id3, id4) id7 = add.remote(id5, id6) result = ray.get(id7) print("Result: ", result) print("Elapsed time (s): ", time.time() - tic) ###Output result 36 Elapsed time (s): 7.171598672866821 result 36 Elapsed time (s): 3.0769569873809814 
###Markdown Below is the same computation as above, expressed a bit more compactly. ###Code # The slow version values = [1, 2, 3, 4, 5, 6, 7, 8] while len(values) > 1: values = [add.remote(values[0], values[1])] + values[2:] result = ray.get(values[0]) # The fast version. # Coding trick: slice the list and push the ray.remote call to the back of the list. values = [1, 2, 3, 4, 5, 6, 7, 8] while len(values) > 1: values = values[2:] + [add.remote(values[0], values[1])] result = ray.get(values[0]) ###Output _____no_output_____ 
###Markdown This time, let's process a class in parallel using Ray. ###Code @ray.remote class Counter(object): def __init__(self): self.n = 0 def increment(self, num): time.sleep(5) self.n += (num**3) print(self.n) def read(self): return self.n number_of_workers = 4 tic = time.time() counters = [Counter.remote() for i in range(number_of_workers)] [cnt_class.increment.remote(idx) for idx, cnt_class in enumerate(counters)] results = [c.read.remote() for c in counters] print(ray.get(results)) print("Elapsed time (s): ", time.time() - tic) number_of_workers = 12 tic = time.time() counters = [Counter.remote() for i in range(number_of_workers)] [c.increment.remote(idx) for idx, c in enumerate(counters)] results = [c.read.remote() for c in counters] print(ray.get(results)) print("Elapsed time (s): ", time.time() - tic) ###Output _____no_output_____ 
###Markdown With the multiprocessing package you need a separate method to check which worker an output came from, but Ray prints the PID together with the output, so it is easy to tell which worker produced it. ###Code # Declare a class that stores and retrieves messages @ray.remote class MessageActor(object): def __init__(self): self.messages = [] def add_message(self, message): self.messages.append(message) def get_and_clear_messages(self): messages = self.messages # time.sleep(0.2) self.messages = [] return messages # The worker function receives messages and stores them. @ray.remote def worker(message_actor, j): for i in range(100): time.sleep(np.random.uniform(0.5, 1)) # Deliberately wait a random amount of time so that each worker accesses message_actor at a different, random time. message_actor.add_message.remote("Message {} from worker {}.".format(i, j)) # Create an instance of the message class message_actor = MessageActor.remote() # Assign 10 parallel workers to the instance created above. # Each worker runs one class method (here, message_actor's add_message method) and keeps appending messages to self.messages. num_of_workers = 10 [worker.remote(message_actor, j) for j in range(num_of_workers)] # Keep fetching messages inside the for loop. for _ in range(100): # Running it like the line below raises an error, because once a class is decorated with @ray.remote, its methods must be called with .remote() appended. # new_messages = message_actor.get_and_clear_messages() # # The correct way new_messages = ray.get(message_actor.get_and_clear_messages.remote()) print("New messages:", new_messages) time.sleep(1) # Running the commands above produces output similar to the following. # New messages: [] # New messages: ['Message 0 from worker 1.', 'Message 0 from worker 0.'] # New messages: ['Message 0 from worker 2.', 'Message 1 from worker 1.', 'Message 1 from worker 0.', 'Message 1 from worker 2.'] # New messages: ['Message 2 from worker 1.', 'Message 2 from worker 0.', 'Message 2 from worker 2.'] # New messages: ['Message 3 from worker 2.', 'Message 3 from worker 1.', 'Message 3 from worker 0.'] # New messages: ['Message 4 from worker 2.', 'Message 4 from worker 0.', 'Message 4 from worker 1.'] # New messages: ['Message 5 from worker 2.', 'Message 5 from worker 0.', 'Message 5 from worker 1.'] ###Output _____no_output_____ 
###Markdown The code above suggests the following if we were to implement distributed RL with Ray. The class above has two methods: 1) add_message and 2) get_and_clear_messages. Meanwhile, the worker function runs the add_message method of the message_actor instance and keeps modifying the class variable self.messages. In the lower part of the code, while new messages keep being appended to self.messages, the other class method, get_and_clear_messages, is executed at the same time. In other words, while one function keeps performing new 'write' operations, another function can access the same variable and perform 'reads'. This suggests that the Replay Buffer widely used in Q-learning style reinforcement learning can be shared by multiple agents, with writes and reads happening concurrently. Executing different methods of a class concurrently is not surprising by itself, but being able to share a variable like self.messages across different functions, continuously appending to it and accessing it in real time with fairly simple syntax, is a big advantage. (A minimal sketch of such a shared replay-buffer actor is given at the end of this notebook.) ###Code import numpy as np from collections import defaultdict num_of_workers = 4 @ray.remote class StreamingPrefixCount(object): def __init__(self): self.prefix_count = defaultdict(int) self.popular_prefixes = set() def add_document(self, document): for word in document: for i in range(1, len(word)): prefix = word[:i] self.prefix_count[prefix] += 1 if self.prefix_count[prefix] > 3: self.popular_prefixes.add(prefix) def get_popular(self): return self.popular_prefixes streaming_actors = [StreamingPrefixCount.remote() for _ in range(num_of_workers)] tic = time.time() for i in range(num_of_workers * 10): document = [np.random.bytes(20) for _ in range(30000)] streaming_actors[i % num_of_workers].add_document.remote(document) results = ray.get([actor.get_popular.remote() for actor in streaming_actors]) popular_prefixes = set() for prefixes in results: popular_prefixes |= prefixes print("Elapsed time (s): ", time.time() - tic) print(popular_prefixes) from collections import defaultdict num_of_workers = 4 class StreamingPrefixCount(object): def __init__(self): self.prefix_count = defaultdict(int) self.popular_prefixes = set() def add_document(self, document): for word in document: for i in range(1, len(word)): prefix = word[:i] self.prefix_count[prefix] += 1 if self.prefix_count[prefix] > 3: self.popular_prefixes.add(prefix) def get_popular(self): return self.popular_prefixes streaming_actors = [StreamingPrefixCount() for _ in range(num_of_workers)] tic = time.time() for i in range(num_of_workers * 10): document = [np.random.bytes(20) for _ in range(30000)] streaming_actors[i % num_of_workers].add_document(document) results = [actor.get_popular() for actor in streaming_actors] popular_prefixes = set() for prefixes in results: popular_prefixes |= prefixes print("Elapsed time (s): ", time.time() - tic) print(popular_prefixes) ###Output _____no_output_____ 
###Markdown Deep learning model evaluation can also be done with Ray. ###Code import psutil import ray import sys import tensorflow as tf num_cpus = psutil.cpu_count(logical=False) ray.init(num_cpus=num_cpus) filename = '/tmp/model' @ray.remote class Model(object): def __init__(self, i): # Pin the actor to a specific core if we are on Linux to prevent # contention between the different actors since TensorFlow uses # multiple threads. if sys.platform == 'linux': psutil.Process().cpu_affinity([i]) # Load the model and some data. self.model = tf.keras.models.load_model(filename) mnist = tf.keras.datasets.mnist.load_data() self.x_test = mnist[1][0] / 255.0 def evaluate_next_batch(self): # Note that we reuse the same data over and over, but in a # real application, the data would be different each time. return self.model.predict(self.x_test) actors = [Model.remote(i) for i in range(num_cpus)] # Parallelize the evaluation of some test data. for j in range(10): results = ray.get([actor.evaluate_next_batch.remote() for actor in actors]) ###Output _____no_output_____ 
###Markdown Characteristics Looking at the for loop above, the actors are used for a total of 10 rounds of evaluation. Ordinarily the model would have been loaded 10 times, but Ray loads the model only once, in each actor's constructor, and that loaded model is then reused by the actors across the repeated evaluations. ###Code # The same thing using Python's built-in multiprocessing package from multiprocessing import Pool import psutil import sys import tensorflow as tf num_cpus = psutil.cpu_count(logical=False) filename = '/tmp/model' def evaluate_next_batch(i): # Pin the process to a specific core if we are on Linux to prevent # contention between the different processes since TensorFlow uses # multiple threads. 
if sys.platform == 'linux': psutil.Process().cpu_affinity([i]) model = tf.keras.models.load_model(filename) mnist = tf.keras.datasets.mnist.load_data() x_test = mnist[1][0] / 255.0 return model.predict(x_test) pool = Pool(num_cpus) for _ in range(10): pool.map(evaluate_next_batch, range(num_cpus)) ###Output _____no_output_____
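###Markdown Returning to the replay-buffer idea discussed after the MessageActor example: the sketch below is a minimal illustration of that pattern and is not part of the original tutorial. The class name `ReplayBuffer`, its capacity, and the usage shown are placeholder choices. ###Code
import numpy as np
import ray

@ray.remote
class ReplayBuffer(object):
    # One actor owns the buffer; many workers can write while a learner reads.
    def __init__(self, capacity=10000):
        self.capacity = capacity
        self.storage = []

    def add(self, transition):
        # Drop the oldest transition once the buffer is full.
        if len(self.storage) >= self.capacity:
            self.storage.pop(0)
        self.storage.append(transition)

    def size(self):
        return len(self.storage)

    def sample(self, batch_size):
        # Random batch, sampled with replacement for simplicity.
        indices = np.random.randint(0, len(self.storage), size=batch_size)
        return [self.storage[i] for i in indices]

# Usage sketch: workers call buffer.add.remote(transition) concurrently,
# while a learner periodically calls ray.get(buffer.sample.remote(32)).
buffer = ReplayBuffer.remote()
###Output _____no_output_____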
4_synthetic_data_attention/Error_modes/distribution_2/m_2/distribution_2_m_2.ipynb
###Markdown Generate dataset ###Code y = np.random.randint(0,4,1200) idx= [] for i in range(4): print(i,sum(y==i)) idx.append(y==i) x = np.zeros((1200,2)) x[idx[0],:] = np.random.uniform(low=[5,2],high=[6,4],size=(sum(idx[0]),2)) x[idx[1],:] = np.random.uniform(low=[5,-3],high=[6,-5],size=(sum(idx[1]),2)) x[idx[2],:] = np.random.uniform(low=[-2,0],high=[-3,-2],size=(sum(idx[2]),2)) x[idx[3],:] = np.random.uniform(low=[1,-8],high=[2,5],size=(sum(idx[3]),2)) # x[idx[0],:] = np.random.multivariate_normal(mean = [5,5],cov=[[0.1,0],[0,0.1]],size=sum(idx[0])) # x[idx[1],:] = np.random.multivariate_normal(mean = [-6,7],cov=[[0.1,0],[0,0.1]],size=sum(idx[1])) # x[idx[2],:] = np.random.multivariate_normal(mean = [-5,-4],cov=[[0.1,0],[0,0.1]],size=sum(idx[2])) # x[idx[0],:] = np.random.multivariate_normal(mean = [5.5,4],cov=[[0.1,0],[0,0.1]],size=sum(idx[0])) # x[idx[1],:] = np.random.multivariate_normal(mean = [6,6.5],cov=[[0.1,0],[0,0.1]],size=sum(idx[1])) # x[idx[2],:] = np.random.multivariate_normal(mean = [4,6],cov=[[0.1,0],[0,0.1]],size=sum(idx[2])) # x[idx[3],:] = np.random.multivariate_normal(mean = [-1,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[3])) # x[idx[4],:] = np.random.multivariate_normal(mean = [0,2],cov=[[0.1,0],[0,0.1]],size=sum(idx[4])) # x[idx[5],:] = np.random.multivariate_normal(mean = [1,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[5])) # x[idx[6],:] = np.random.multivariate_normal(mean = [0,-1],cov=[[0.1,0],[0,0.1]],size=sum(idx[6])) # x[idx[7],:] = np.random.multivariate_normal(mean = [0,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[7])) # x[idx[8],:] = np.random.multivariate_normal(mean = [-0.5,-0.5],cov=[[0.1,0],[0,0.1]],size=sum(idx[8])) # x[idx[9],:] = np.random.multivariate_normal(mean = [0.4,0.2],cov=[[0.1,0],[0,0.1]],size=sum(idx[9])) for i in range(4): plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i)) plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.savefig("type3_2_dist.png",bbox_inches="tight") plt.savefig("type3_2_dist.pdf",bbox_inches="tight") foreground_classes = {'class_0','class_1', 'class_2'} background_classes = {'class_3'} fg_class = np.random.randint(0,3) fg_idx = np.random.randint(0,2) #m=2 a = [] for i in range(2): #m=2 if i == fg_idx: b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1) a.append(x[b]) print("foreground "+str(fg_class)+" present at " + str(fg_idx)) else: bg_class = np.random.randint(3,4) b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1) a.append(x[b]) print("background "+str(bg_class)+" present at " + str(i)) a = np.concatenate(a,axis=0) print(a.shape) print(fg_class , fg_idx) a.shape np.reshape(a,(4,1)) desired_num = 3000 mosaic_list =[] mosaic_label = [] fore_idx=[] for j in range(desired_num): fg_class = np.random.randint(0,3) fg_idx = np.random.randint(0,2) #m=2 a = [] for i in range(2): #m=2 if i == fg_idx: b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1) a.append(x[b]) # print("foreground "+str(fg_class)+" present at " + str(fg_idx)) else: bg_class = np.random.randint(3,4) b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1) a.append(x[b]) # print("background "+str(bg_class)+" present at " + str(i)) a = np.concatenate(a,axis=0) mosaic_list.append(np.reshape(a,(4,1))) mosaic_label.append(fg_class) fore_idx.append(fg_idx) mosaic_list = np.concatenate(mosaic_list,axis=1).T # print(mosaic_list) print(np.shape(mosaic_label)) print(np.shape(fore_idx)) class MosaicDataset(Dataset): """MosaicDataset dataset.""" def __init__(self, mosaic_list, mosaic_label, fore_idx): """ Args: csv_file (string): 
Path to the csv file with annotations. root_dir (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. """ self.mosaic = mosaic_list self.label = mosaic_label self.fore_idx = fore_idx def __len__(self): return len(self.label) def __getitem__(self, idx): return self.mosaic[idx] , self.label[idx], self.fore_idx[idx] batch = 250 msd = MosaicDataset(mosaic_list, mosaic_label , fore_idx) train_loader = DataLoader( msd,batch_size= batch ,shuffle=True) class Wherenet(nn.Module): def __init__(self): super(Wherenet,self).__init__() self.linear1 = nn.Linear(2,1) def forward(self,z): x = torch.zeros([batch,2],dtype=torch.float64) #m=2 y = torch.zeros([batch,2], dtype=torch.float64) #x,y = x.to("cuda"),y.to("cuda") for i in range(2): #m=9 x[:,i] = self.helper(z[:,2*i:2*i+2])[:,0] #print(k[:,0].shape,x[:,i].shape) x = F.softmax(x,dim=1) # alphas x1 = x[:,0] for i in range(2): #m=2 x1 = x[:,i] #print() y = y+torch.mul(x1[:,None],z[:,2*i:2*i+2]) return y , x def helper(self,x): #x = F.relu(self.linear1(x)) #x = F.relu(self.linear2(x)) x = self.linear1(x) return x trainiter = iter(train_loader) input1,labels1,index1 = trainiter.next() where = Wherenet().double() where = where out_where,alphas = where(input1) out_where.shape,alphas.shape class Whatnet(nn.Module): def __init__(self): super(Whatnet,self).__init__() self.linear1 = nn.Linear(2,3) #self.linear2 = nn.Linear(4,3) # self.linear3 = nn.Linear(8,3) def forward(self,x): #x = F.relu(self.linear1(x)) #x = F.relu(self.linear2(x)) x = self.linear1(x) return x what = Whatnet().double() # what(out_where) test_data_required = 1000 mosaic_list_test =[] mosaic_label_test = [] fore_idx_test=[] for j in range(test_data_required): fg_class = np.random.randint(0,3) fg_idx = np.random.randint(0,2) #m=2 a = [] for i in range(2): #m=2 if i == fg_idx: b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1) a.append(x[b]) # print("foreground "+str(fg_class)+" present at " + str(fg_idx)) else: bg_class = np.random.randint(3,4) b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1) a.append(x[b]) # print("background "+str(bg_class)+" present at " + str(i)) a = np.concatenate(a,axis=0) mosaic_list_test.append(np.reshape(a,(4,1))) mosaic_label_test.append(fg_class) fore_idx_test.append(fg_idx) mosaic_list_test = np.concatenate(mosaic_list_test,axis=1).T print(mosaic_list_test.shape) test_data = MosaicDataset(mosaic_list_test,mosaic_label_test,fore_idx_test) test_loader = DataLoader( test_data,batch_size= batch ,shuffle=False) focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 col1=[] col2=[] col3=[] col4=[] col5=[] col6=[] col7=[] col8=[] col9=[] col10=[] col11=[] col12=[] col13=[] criterion = nn.CrossEntropyLoss() optimizer_where = optim.SGD(where.parameters(), lr=0.01, momentum=0.9) optimizer_what = optim.SGD(what.parameters(), lr=0.01, momentum=0.9) nos_epochs = 100 train_loss=[] test_loss =[] train_acc = [] test_acc = [] for epoch in range(nos_epochs): # loop over the dataset multiple times focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 running_loss = 0.0 cnt=0 iteration = desired_num // batch #training data set for i, data in enumerate(train_loader): inputs , labels , fore_idx = data #inputs,labels,fore_idx = inputs.to(device),labels.to(device),fore_idx.to(device) # zero the parameter 
gradients optimizer_what.zero_grad() optimizer_where.zero_grad() avg_inp,alphas = where(inputs) outputs = what(avg_inp) _, predicted = torch.max(outputs.data, 1) loss = criterion(outputs, labels) loss.backward() optimizer_what.step() optimizer_where.step() running_loss += loss.item() if cnt % 6 == 5: # print every 6 mini-batches print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / 6)) running_loss = 0.0 cnt=cnt+1 if epoch % 5 == 4: for j in range (batch): focus = torch.argmax(alphas[j]) if(alphas[j][focus] >= 0.5): argmax_more_than_half +=1 else: argmax_less_than_half +=1 if(focus == fore_idx[j] and predicted[j] == labels[j]): focus_true_pred_true += 1 elif(focus != fore_idx[j] and predicted[j] == labels[j]): focus_false_pred_true +=1 elif(focus == fore_idx[j] and predicted[j] != labels[j]): focus_true_pred_false +=1 elif(focus != fore_idx[j] and predicted[j] != labels[j]): focus_false_pred_false +=1 if epoch % 5 == 4: col1.append(epoch) col2.append(argmax_more_than_half) col3.append(argmax_less_than_half) col4.append(focus_true_pred_true) col5.append(focus_false_pred_true) col6.append(focus_true_pred_false) col7.append(focus_false_pred_false) #************************************************************************ #testing data set with torch.no_grad(): focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 for data in test_loader: inputs, labels , fore_idx = data #inputs,labels,fore_idx = inputs.to(device),labels.to(device),fore_idx.to(device) # print(inputs.shtorch.save(where.state_dict(),"model_epoch"+str(epoch)+".pt")ape,labels.shape) avg_inp,alphas = where(inputs) outputs = what(avg_inp) _, predicted = torch.max(outputs.data, 1) for j in range (batch): focus = torch.argmax(alphas[j]) if(alphas[j][focus] >= 0.5): argmax_more_than_half +=1 else: argmax_less_than_half +=1 if(focus == fore_idx[j] and predicted[j] == labels[j]): focus_true_pred_true += 1 elif(focus != fore_idx[j] and predicted[j] == labels[j]): focus_false_pred_true +=1 elif(focus == fore_idx[j] and predicted[j] != labels[j]): focus_true_pred_false +=1 elif(focus != fore_idx[j] and predicted[j] != labels[j]): focus_false_pred_false +=1 col8.append(argmax_more_than_half) col9.append(argmax_less_than_half) col10.append(focus_true_pred_true) col11.append(focus_false_pred_true) col12.append(focus_true_pred_false) col13.append(focus_false_pred_false) #torch.save(where.state_dict(),"where_model_epoch"+str(epoch)+".pt") #torch.save(what.state_dict(),"what_model_epoch"+str(epoch)+".pt") print('Finished Training') #torch.save(where.state_dict(),"where_model_epoch"+str(nos_epochs)+".pt") #torch.save(what.state_dict(),"what_model_epoch"+str(epoch)+".pt") columns = ["epochs", "argmax > 0.5" ,"argmax < 0.5", "focus_true_pred_true", "focus_false_pred_true", "focus_true_pred_false", "focus_false_pred_false" ] df_train = pd.DataFrame() df_test = pd.DataFrame() df_train[columns[0]] = col1 df_train[columns[1]] = col2 df_train[columns[2]] = col3 df_train[columns[3]] = col4 df_train[columns[4]] = col5 df_train[columns[5]] = col6 df_train[columns[6]] = col7 df_test[columns[0]] = col1 df_test[columns[1]] = col8 df_test[columns[2]] = col9 df_test[columns[3]] = col10 df_test[columns[4]] = col11 df_test[columns[5]] = col12 df_test[columns[6]] = col13 df_train plt.plot(col1,col2, label='argmax > 0.5') plt.plot(col1,col3, label='argmax < 0.5') plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel("epochs") plt.ylabel("training 
data") plt.title("On Training set") plt.show() plt.plot(col1,col4, label ="focus_true_pred_true ") plt.plot(col1,col5, label ="focus_false_pred_true ") plt.plot(col1,col6, label ="focus_true_pred_false ") plt.plot(col1,col7, label ="focus_false_pred_false ") plt.title("On Training set") plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel("epochs") plt.ylabel("training data") plt.savefig("linear_type3_21.png",bbox_inches="tight") plt.savefig("linear_type3_21.pdf",bbox_inches="tight") plt.show() df_test plt.plot(col1,col8, label='argmax > 0.5') plt.plot(col1,col9, label='argmax < 0.5') plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel("epochs") plt.ylabel("Testing data") plt.title("On Testing set") plt.show() plt.plot(col1,col10, label ="focus_true_pred_true ") plt.plot(col1,col11, label ="focus_false_pred_true ") plt.plot(col1,col12, label ="focus_true_pred_false ") plt.plot(col1,col13, label ="focus_false_pred_false ") plt.title("On Testing set") plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel("epochs") plt.ylabel("Testing data") plt.show() # where.state_dict()["linear1.weight"][:] = torch.Tensor(np.array([[ 0, -1]])) # where.state_dict()["linear1.bias"][:] = torch.Tensor(np.array([0])) for param in where.named_parameters(): print(param) # what.state_dict()["linear1.weight"][:] = torch.Tensor(np.array([[ 5, 0], # [0,5], # [ 0, 0]])) # what.state_dict()["linear1.bias"][:] = torch.Tensor(np.array([0, 0, 0])) for param in what.named_parameters(): print(param) xx,yy= np.meshgrid(np.arange(-4,10,0.05),np.arange(-8,6,0.05)) X = np.concatenate((xx.reshape(-1,1),yy.reshape(-1,1)),axis=1) X = torch.Tensor(X).double() Y = where.helper(X) Y1 = what(X) X.shape,Y.shape X = X.detach().numpy() Y = Y[:,0].detach().numpy() fig = plt.figure(figsize=(6,6)) cs = plt.contourf(X[:,0].reshape(xx.shape),X[:,1].reshape(yy.shape),Y.reshape(xx.shape)) plt.xlabel("X1") plt.ylabel("X2") fig.colorbar(cs) for i in range(4): plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i)) plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.savefig("focus_contour.png")#,bbox_inches='tight') Y1 = Y1.detach().numpy() Y1 = torch.softmax(torch.Tensor(Y1),dim=1) _,Z4= torch.max(Y1,1) Z1 = Y1[:,0] Z2 = Y1[:,1] Z3 = Y1[:,2] Z4 #fig = plt.figure(figsize=(6,6)) # plt.scatter(X[:,0],X[:,1],c=Z1) # plt.scatter(X[:,0],X[:,1],c=Z2) # plt.scatter(X[:,0],X[:,1],c=Z3) #cs = plt.contourf(X[:,0].reshape(xx.shape),X[:,1].reshape(yy.shape),Z1.reshape(xx.shape)) # #plt.colorbar(cs) # cs = plt.contourf(X[:,0].reshape(xx.shape),X[:,1].reshape(yy.shape),Z2.reshape(xx.shape)) # #plt.colorbar(cs) # cs = plt.contourf(X[:,0].reshape(xx.shape),X[:,1].reshape(yy.shape),Z3.reshape(xx.shape)) #plt.colorbar(cs) # plt.xlabel("X1") # plt.ylabel("X2") #ax.view_init(60,100) #plt.savefig("non_interpretable_class_2d.pdf",bbox_inches='tight') avrg = [] with torch.no_grad(): for i, data in enumerate(train_loader): inputs , labels , fore_idx = data avg_inp,alphas = where(inputs) avrg.append(avg_inp) avrg= np.concatenate(avrg,axis=0) plt.scatter(X[:,0],X[:,1],c=Z4) for i in range(4): plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i)) plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.scatter(avrg[:,0],avrg[:,1]) plt.savefig("decision_boundary.png",bbox_inches="tight") true = [] pred = [] acc= 0 for i, data in enumerate(train_loader): inputs , labels , fore_idx = data avg_inp,alphas = where(inputs) outputs = what(avg_inp) _, predicted = torch.max(outputs.data, 1) true.append(labels) 
pred.append(predicted) acc+=sum(predicted == labels) true = np.concatenate(true,axis=0) pred = np.concatenate(pred,axis=0) from sklearn.metrics import confusion_matrix confusion_matrix(true,pred) ###Output _____no_output_____
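###Markdown As a quick follow-up to the confusion matrix above, the overall accuracy and per-class recall can be read off it directly. This is a minimal sketch, assuming `true` and `pred` from the previous cell are still in scope. ###Code
import numpy as np
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(true, pred)
# overall accuracy: correctly classified samples over all samples
overall_acc = np.trace(cm) / cm.sum()
# per-class recall: diagonal entry divided by the number of true samples of that class
per_class_recall = np.diag(cm) / cm.sum(axis=1)
print(overall_acc, per_class_recall)
###Output _____no_output_____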
1_Basics/1_Graph_and_Session.ipynb
###Markdown Introduction The first step to learn Tensorflow is to understand its main key feature, the __"computational graph"__ approach. Basically, all Tensorflow codes contain two important parts:__Part 1:__ building the __GRAPH__ which represents the data flow of the computations__Part 2:__ running a __SESSION__ which executes the operations in the graphIn fact, TensorFlow separates the definition of computations from their execution. These two parts are explained in more details in the following sections. Before that, remember that the first step is to import the Tensorflow library! ###Code import tensorflow as tf ###Output _____no_output_____ ###Markdown This gives Python access to all of TensorFlow's classes, methods, and symbols. Using this command, TensorFlow library will be imported under the alias __tf__ so that later we can use it instead of typing the whole term __tensorflow__ each time.__What is a Tensor?__TensorFlow programs use a tensor data structure to represent all data. Tensor is a multi-dimensional array (0-D tensor: scalar, 1-D tensor: vector, 2-D tensor: matrix, and so on). So TensorFlow is simply referring to the flow of the Tensors in the Graph.___Fig1. ___ A sample computational graph in TensorFlow GRAPHThe biggest idea of all of the big ideas about Tensorflow is that numeric computation is expressed as a computational graph. In other words, the backbone of any Tensorflow program is going to be a __Graph__. As mentioned on the TensorFlow website, "A __computational graph__ (or graph in short) is a series of TensorFlow operations arranged into a graph of nodes".So First let's see what does a node and operation mean?! The best way to explain it is by looking at a simple example. Suppose we want to write the code for function $f(x,y)=x^2y+y+2$. The Graph in TensorFlow will be something like:___Fig2. ___ Schematic of the constructed computational graph in TensorFlowAs it can be seen, the graph is composed of a series of nodes connected to each other by edges. Each __node__ in the graph is called __op__ (short for operation). So we'll have one node for each operation; either for operations on tensors (like math operations) or generating tensors (like variables and constants). Each node takes zero or more tensors as inputs and produces a tensor as an output.Now Let's build a simple computational graph. Example 1:Let's add two values, say a=2 and b=3, using TensorFlow. To do so, we need to call __tf.add()__. To see how it works and how many arguments it needs, simply Google it and check the first link (TensorFlow official documentation). The documentation says it can be used like __tf.add(x, y, name=None)__ where x and y are the values to be added together and __name__ is the operation name, i.e. the name associated to the addition node on the graph.If we call the operation __"Add"__, the code will be as follows: ###Code import tensorflow as tf a = 2 b = 3 c = tf.add(a, b, name='Add') print(c) ###Output Tensor("Add:0", shape=(), dtype=int32) ###Markdown The generated graph and variables are:__*Note__: The graph is generated using __Tensorboard__ which is a visualization tool comes with TensorFlow. We'll learn more about it in the next tutorials.___Fig3. ___ __Left:__ generated graph visualized in Tensorboard, __Right:__ generated variables (screenshot captured from PyCharm debugger when running in debug mode)This code creates two input nodes (for inputs a=2 and b=3) and one output node for the addition operation (named Add). 
As you see, when we print out the variable __c__ (i.e. the output Tensor of the addition operation), it prints out the Tensor information; its name (Add), shape (__()__ means scalar), and type (32-bit integer). However, It does not spit out the result (2+3=5). Why?!To actually evaluate the nodes, we must run the computational graph within a __Session__. In simple words, the written code only generates the graph which only determines the expected sizes of Tensors and operations to be executed on them. However, it doesn't assign a numeric value to any of the Tensors. To assign these values and make them flow through the graph, we need to run a session.Therefore a TensorFlow Graph is something like a function definition in Python. It __WILL NOT__ do any computation for you (just like a function definition will not have any execution result). It __ONLY__ defines computation operations. SessionTo compute anything, a graph must be launched in a session. Technically speaking, session places the graph ops onto Devices, such as CPUs or GPUs, and provides methods to execute them. In our simple example, to run the graph and get the value for c: ###Code sess = tf.Session() print(sess.run(c)) sess.close() ###Output 5 ###Markdown This code creates a Session object (assigned to __sess__), and then (the second line) invokes its run method to run enough of the computational graph to evaluate __c__. This means that it only runs that part of the graph which is necessary to get the value of __c__ (in this simple example, it runs the whole graph). The last line closes the session. The following code does the same thing and is more commonly used. The only difference is that there is no need to close the session at the end as it gets closed automatically. ###Code with tf.Session() as sess: print(sess.run(c)) ###Output 5 ###Markdown Now let's look at the created graph one more time. Don't you see anything weird?___Fig4. ___ The generated graph visualized by TensorboardExactly! What is x and y?! We didn't define any x or y variable!Well... To explain the reason more clearly, let's make up two names; say __"Python-name"__ and __"TensorFlow-name"__. In this piece of code, we generated 3 variables (look at the right panel of Fig. 3) with __"Python-name"__s of _a_, _b_, and _c_. Here, _a_ and _b_ are Python variables, thus have no __"TensorFlow-name"__; while _c_ is a Tensor with ___Add___ as its __"TensorFlow-name"__. Clear? Okay, let's get back to our question, what is x and y then?In an ideal Tensorflow case, __tf.add()__ receives two __Tensors__ with defined __"TensorFlow-name"__ as input (These names are separate from __Python-name__). For example, by writing $c = tf.add(a, b, name='Add')$, we're actually creating a variable (or Tensor) with __c__ as its Python-name and __Add__ as the TensorFlow-name. In the above code, we passed two Python variables (a=2 and b=3) which only have Python-names (a and b), but no TensorFlow-name. TensorFlow uses the TensorFlow-names for visualizing the graphs. Since a and b have no TensorFlow-name, it uses some default names, x and y. __*Note:__ This name mismatch can easily be solved by using tf.constant() for creating the input variables as Tensors instead of simply using Python variables (a=2, b=3). This is explained thoroughly in the next tutorial where we talk about TensorFlow DataTypes. For now, we'll continue using Python variables and change the Python variable names __a__ and __b__ into __x__ and __y__ to solve the name mismatch temporarily. 
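###Markdown To illustrate the note above, the same addition can be built from named constant Tensors instead of plain Python variables, so the graph shows the names we chose. This is just a small sketch of that idea; `tf.constant()` itself is covered properly in the next tutorial. ###Code
import tensorflow as tf

# create the inputs as named constant Tensors rather than Python ints
x = tf.constant(2, name='x')
y = tf.constant(3, name='y')
c = tf.add(x, y, name='Add')

with tf.Session() as sess:
    print(sess.run(c))  # prints 5
###Output _____no_output_____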
Now let's look at a more complicated example. Example 2:Creating a graph with multiple math operations ###Code import tensorflow as tf x = 2 y = 3 add_op = tf.add(x, y, name='Add') mul_op = tf.multiply(x, y, name='Multiply') pow_op = tf.pow(add_op, mul_op, name='Power') useless_op = tf.multiply(x, add_op, name='Useless') with tf.Session() as sess: pow_out, useless_out = sess.run([pow_op, useless_op]) ###Output _____no_output_____ ###Markdown Introduction Why do we need tensorflow? Why are people crazy about it? In a way, it is lazy computing and offers flexibility in the way you run your code. What is this thing with flexbility and laze computing? We are glad, you asked!Lazy Computing: TensorFlow is a way of representing computation without actually performing it until asked. The first step to learn Tensorflow is to understand its main key feature, the __"computational graph"__ approach. Basically, all Tensorflow codes contain two important parts:__Part 1:__ building the __GRAPH__, it represents the data flow of the computations__Part 2:__ running a __SESSION__, it executes the operations in the graphFirst step you create the graph i.e. what you want to do with the data, then you run it seperately using a session (don't struggle to wrap your head around it, it will come to you eventually). Flexibility: When you create a graph, you are not bound to run the whole graph and can control the parts of the graph that are executed separately. This provides a huge flexibility with your models. Bonus: One of the biggest advantages of TensorFlow is its visualizations of the computation graph. Its called TensorBoard and will be discussed in future. Now that we have discussed what and why about TensorFlow, lets dive in to the actual thing.TensorFlow separates the definition of computations from their execution. These two parts are explained in more detail in the following sections. Before that, remember that the first step is to import the Tensorflow library! ###Code import tensorflow as tf ###Output _____no_output_____ ###Markdown This gives Python access to all of TensorFlow's classes, methods, and symbols. Using this command, TensorFlow library will be imported under the alias __tf__ so that later we can use it instead of typing the whole term __tensorflow__ each time.__What is a Tensor?__TensorFlow programs use a data structure called tensor to represent all the data. Any type of data you plan to use for your model can be stored in Tensors. Simply put, a Tensor is a multi-dimensional array (0-D tensor: scalar, 1-D tensor: vector, 2-D tensor: matrix, and so on). Hence, TensorFlow is simply referring to the flow of the Tensors in the computational graph.___Fig1. ___ A sample computational graph in TensorFlow GRAPHThe biggest idea about Tensorflow is that all the numerical computations are expressed as a computational graph. In other words, the backbone of any Tensorflow program is a __Graph__. Anything that happens in your model is represented by the computational graph. This makes it, the to go place for anything related to your model. Quoted from the TensorFlow website, "A __computational graph__ (or graph in short) is a series of TensorFlow operations arranged into a graph of nodes". Basically, it means a graph is just an arrangement of nodes that represent the operations in your model. So First let's see what does a node and operation mean?! The best way to explain it is by looking at a simple example. Suppose we want to write the code for function $f(x,y)=x^2y+y+2$. 
The Graph in TensorFlow will be something like:___Fig2. ___ Schematic of the constructed computational graph in TensorFlowThe graph is composed of a series of nodes connected to each other by edges (from the image above). Each __node__ in the graph is called __op__ (short for operation). So we'll have one node for each operation; either for operations on tensors (like math operations) or generating tensors (like variables and constants). Each node takes zero or more tensors as inputs and produces a tensor as an output.Now Let's build a simple computational graph. Example 1:Let's start with a basic arithmatic operation like addition to demonstrate a graph. The code adds two values, say a=2 and b=3, using TensorFlow. To do so, we need to call __tf.add()__. From here on, we recommend you to check out the documentation of each method/class to get a clear idea of what it can do(documentation can be found at tensorflow.org or you can just use google to get to the required page in the documentation). The __tf.add()__ has three arugments 'x', 'y', and 'name' where x and y are the values to be added together and __name__ is the operation name, i.e. the name associated to the addition node on the graph.If we call the operation __"Add"__, the code will be as follows: ###Code import tensorflow as tf a = 2 b = 3 c = tf.add(a, b, name='Add') print(c) ###Output Tensor("Add:0", shape=(), dtype=int32) ###Markdown The generated graph and variables are:__*Note__: The graph is generated using __Tensorboard__. As discussed earlier, it is a visualization tool for the graph and will be discussed in detail in future.___Fig3. ___ __Left:__ generated graph visualized in Tensorboard, __Right:__ generated variables (screenshot captured from PyCharm debugger when running in debug mode)This code creates two input nodes (for inputs a=2 and b=3) and one output node for the addition operation (named Add). When we print out the variable __c__ (i.e. the output Tensor of the addition operation), it prints out the Tensor information; its name (Add), shape (__()__ means scalar), and type (32-bit integer). However, It does not spit out the result (2+3=5). Why?!Remember earlier in this post, we talked about the two parts of a TensorFlow code. First step is to create a graph and to actually evaluate the nodes, we must run the computational graph within a __Session__. In simple words, the written code only generates the graph which only determines the expected sizes of Tensors and operations to be executed on them. However, it doesn't assign a numeric value to any of the Tensors i.e. TensorFlow does not execute the graph unless it is specified to do so with a session. Hence, to assign these values and make them flow through the graph, we need to create and run a session.Therefore a TensorFlow Graph is something like a function definition in Python. It __WILL NOT__ do any computation for you (just like a function definition will not have any execution result). It __ONLY__ defines computation operations. SessionTo compute anything, a graph must be launched in a session. Technically, session places the graph ops on hardware such as CPUs or GPUs and provides methods to execute them. 
In our example, to run the graph and get the value for c the following code will create a session and execute the graph by running 'c': ###Code sess = tf.Session() print(sess.run(c)) sess.close() ###Output 5 ###Markdown This code creates a Session object (assigned to __sess__), and then (the second line) invokes its run method to run enough of the computational graph to evaluate __c__. This means that it only runs that part of the graph which is necessary to get the value of __c__ (remember the flexibility of using TensorFlow? In this simple example, it runs the whole graph). Remember to close the session at the end of the session. That is done using the last line in the above code. The following code does the same thing and is more commonly used. The only difference is that there is no need to close the session at the end as it gets closed automatically. ###Code with tf.Session() as sess: print(sess.run(c)) ###Output 5 ###Markdown Now let's look at the created graph one more time. Don't you see anything weird?___Fig4. ___ The generated graph visualized by TensorboardExactly! What is x and y?! Where did these two thing come from? We didn't define any x or y variables!Well... To explain clearly, let's make up two names; say __"Python-name"__ and __"TensorFlow-name"__. In this piece of code, we generated 3 variables (look at the right panel of Fig. 3) with __"Python-name"__s of _a_, _b_, and _c_. Here, _a_ and _b_ are Python variables, thus have no __"TensorFlow-name"__; while _c_ is a Tensor with ___Add___ as its __"TensorFlow-name"__. Clear? Okay, let's get back to our question, what is x and y then?In an ideal Tensorflow case, __tf.add()__ receives two __Tensors__ with defined __"TensorFlow-name"__ as input (these names are separate from __Python-name__). For example, by writing $c = tf.add(a, b, name='Add')$, we're actually creating a variable (or Tensor) with __c__ as its Python-name and __Add__ as the TensorFlow-name. In the above code, we passed two Python variables (a=2 and b=3) which only have Python-names (a and b), but they have no TensorFlow-names. TensorFlow uses the TensorFlow-names for visualizing the graphs. Since a and b have no TensorFlow-names, it uses some default names, x and y. __*Note:__ This name mismatch can easily be solved by using tf.constant() for creating the input variables as Tensors instead of simply using Python variables (a=2, b=3). This is explained thoroughly in the next tutorial where we talk about TensorFlow DataTypes. For now, we'll continue using Python variables and change the Python variable names __a__ and __b__ into __x__ and __y__ to solve the name mismatch temporarily. Now let's look at a more complicated example. Example 2:Creating a graph with multiple math operations ###Code import tensorflow as tf x = 2 y = 3 add_op = tf.add(x, y, name='Add') mul_op = tf.multiply(x, y, name='Multiply') pow_op = tf.pow(add_op, mul_op, name='Power') useless_op = tf.multiply(x, add_op, name='Useless') with tf.Session() as sess: pow_out, useless_out = sess.run([pow_op, useless_op]) ###Output _____no_output_____
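###Markdown As a small follow-up to Example 2 (assuming the ops defined above are still in the current graph), fetching only `pow_op` makes the session run just the nodes it depends on (`Add`, `Multiply`, `Power`), while `Useless` is evaluated only when it is explicitly requested. This is where the flexibility mentioned earlier shows up in practice. ###Code
with tf.Session() as sess:
    # only the subgraph needed for pow_op is executed here
    print(sess.run(pow_op))                     # (2 + 3) ** (2 * 3) = 15625
    # fetching both ops in a single call evaluates the shared nodes once
    pow_out, useless_out = sess.run([pow_op, useless_op])
    print(pow_out, useless_out)                 # 15625 10
###Output _____no_output_____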
nbs/14_download_detect_crop_save.ipynb
###Markdown Download-Unzip & Detecting & Cropping Faces & Saving> - Download and Unzip Part File> - Detection is made on only original video for time saving> - While cropping fake images original face detections are used ###Code # default_exp face_detection.download_detect_crop_save #export from fastai.vision import * from dfdc.core.core import * from dfdc.core.download_unzip import * from dfdc.face_detection.generate_detections import * from dfdc.face_detection.save_cropped_faces import * ###Output _____no_output_____ ###Markdown Download and Unzip ###Code array(download_parts) download_part_no = "00" video_path = download_unzip(download_part_no) part_no = video_path.name.split("_")[-1]; part_no ###Output _____no_output_____ ###Markdown Detect Faces from Original Videos ###Code modelname = "mobilenet" data_path = Path("/home/ubuntu/data/dfdc/") video_path = Path(data_path/f"dfdc_train/dfdc_train_part_{part_no}/") len(video_path.ls()) metadf = read_metadata(get_files(video_path, extensions=['.json'], recurse=True)[0]) video_files_txt = video_path/"original_video_files.txt" _ = get_original_video_list(video_path, metadf, video_files_txt) dest_fname = data_path/f"dfdc_face_detections/part_{part_no}_retina_detections.csv" freq = 10 model_args = dict(confidence_threshold = 0.5, top_k = 5, nms_threshold = 0.5, keep_top_k = 5) df = generate_face_detections(video_files_txt, freq, "mobilenet") df.head() ###Output _____no_output_____ ###Markdown Crop and Save Faces for All Videos ###Code metadf["source"] = metadf.apply(lambda o: o['original'] if type(o['original']) == str else o['fname'], axis=1) df = df.rename(columns={"fname":"source"}) face_detections_df = metadf.merge(df, how='inner', on='source') face_detections_df.head() face_detections_csv = data_path/f'dfdc_face_detections/part_{part_no}_retina_detections.csv' face_detections_df.to_csv(face_detections_csv, index=False) len(set(face_detections_df.fname)) save_cropped_faces(data_path, video_path, face_detections_csv) ###Output _____no_output_____ ###Markdown Remove videos ###Code shutil.rmtree(video_path) ###Output _____no_output_____ ###Markdown Combine All ###Code array(download_parts) #export def download_detect_crop_save(download_part_no): # Download and Unzip print(f"Downloading part {download_part_no}") video_path = download_unzip(download_part_no) part_no = video_path.name.split("_")[-1] # Detect Faces from Original Videos modelname = "mobilenet" data_path = Path("/home/ubuntu/data/dfdc/") video_path = Path(data_path/f"dfdc_train/dfdc_train_part_{part_no}/") metadf = read_metadata(get_files(video_path, extensions=['.json'], recurse=True)[0]) video_files_txt = video_path/"original_video_files.txt" _ = get_original_video_list(video_path, metadf, video_files_txt) dest_fname = data_path/f"dfdc_face_detections/part_{part_no}_retina_detections.csv" freq = 10 model_args = dict(confidence_threshold = 0.5, top_k = 5, nms_threshold = 0.5, keep_top_k = 5) df = generate_face_detections(video_files_txt, freq, "mobilenet") # Crop and Save Faces for All Videos metadf["source"] = metadf.apply(lambda o: o['original'] if type(o['original']) == str else o['fname'], axis=1) df = df.rename(columns={"fname":"source"}) face_detections_df = metadf.merge(df, how='inner', on='source') # Keep only existing videos (video dir missing videos which is in metadata) video_files = get_files(video_path, extensions=['.mp4'], recurse=True) video_file_fnames = [o.name for o in video_files] face_detections_df = 
face_detections_df[face_detections_df.fname.isin(video_file_fnames)].reset_index(drop=True) # Do crop and save face_detections_csv = data_path/f'dfdc_face_detections/part_{part_no}_retina_detections.csv' face_detections_df.to_csv(face_detections_csv, index=False) save_cropped_faces(data_path, video_path, face_detections_csv) # Remove videos shutil.rmtree(video_path) ###Output _____no_output_____ ###Markdown Visualize ###Code crop_path = Path(f'/home/ubuntu/data/dfdc/dfdc_cropped_faces/dfdc_train_part_{part_no}') len(crop_path.ls()) def show_crop_dir(crop_dir): il = ImageList.from_folder(crop_dir) n = int(np.ceil(np.sqrt(len(il.items)))); n axes = subplots(n,n).flatten() for img, ax in zip(il, axes): img.show(ax=ax) rand_src = np.random.choice(list(set(metadf.source))) orig_fname = metadf[metadf.fname == rand_src].fname.values[0] fnames = metadf[metadf.source == rand_src].fname.values crop_dir = (crop_path/Path(orig_fname).stem); print(crop_dir) show_crop_dir(crop_dir) rand_fname = Path(np.random.choice(fnames)) crop_dir = (crop_path/rand_fname.stem); print(crop_dir) show_crop_dir(crop_dir) from dfdc.core.video_core import * play_video(video_path/f'{orig_fname}', video_path/f'{rand_fname}') ###Output _____no_output_____ ###Markdown export ###Code from nbdev.export import notebook2script notebook2script() ###Output Converted 001 - extract_faces.ipynb. Converted 002 - face_detection_retinaface.ipynb. Converted 003 - save_face_crops.ipynb. Converted 004 - tl_baseline.ipynb. Converted 00_core.ipynb. Converted 01_video_core.ipynb. Converted 02_download_unzip_files.ipynb. Converted 10_bbox_utils.ipynb. Converted 11_retinaface_detection.ipynb. Converted 12_generate_face_detections.ipynb. Converted 13_save_cropped_faces.ipynb. Converted 14_download_detect_crop_save.ipynb. Converted 15_extract_all.ipynb. Converted 20_datasets.ipynb. Converted 21_single_frame_model.ipynb. Converted index.ipynb. Converted inspect original fake pairs for face detection.ipynb. Converted run_commands.ipynb.
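###Markdown A minimal usage sketch for the "Combine All" section above, assuming `download_parts` and `download_detect_crop_save` defined earlier are in scope. Each call downloads one part, saves the cropped faces, and removes the raw videos again, so it can simply be looped over the parts to process. ###Code
# process a couple of parts as an example; extend the slice to run everything
for part_no in download_parts[:2]:
    download_detect_crop_save(part_no)
###Output _____no_output_____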
tpcx_bb/data-conversion.ipynb
###Markdown Connect to existing Dask CUDA Cluster ###Code from dask.distributed import Client cluster_ip = 'YOUR_DASK_SCHEDULER_IP' client = Client(f'ucx://{cluster_ip}:8786') client import dask_cudf tpcx_bb_home = 'YOUR_TPCX_BB_REPO_LOCATION' spark_schema_dir = f'{tpcx_bb_home}/tpcx_bb/spark_table_schemas/' # Spark uses different names for column types, and RAPIDS doesn't yet support Decimal types. def get_schema(table): with open(f'{spark_schema_dir}{table}.schema') as fp: schema = fp.read() names = [line.replace(',', '').split()[0] for line in schema.split('\n')] types = [line.replace(',', '').split()[1].replace('bigint', 'int').replace('string', 'str') for line in schema.split('\n')] types = [col_type.split('(')[0].replace('decimal', 'float') for col_type in types] return names, types def read_csv_table(table, chunksize='256 MiB'): # build dict of dtypes to use when reading CSV names, types = get_schema(table) dtype = {names[i]: types[i] for i in range(0, len(names))} base_path = f'{data_dir}/data/{table}' files = os.listdir(base_path) # item_marketprices has "audit" files that should be excluded if table == 'item_marketprices': paths = [f'{base_path}/{fn}' for fn in files if 'audit' not in fn and os.path.getsize(f'{base_path}/{fn}') > 0] base_path = f'{data_dir}/data_refresh/{table}' paths = paths + [f'{base_path}/{fn}' for fn in os.listdir(base_path) if 'audit' not in fn and os.path.getsize(f'{base_path}/{fn}') > 0] df = dask_cudf.read_csv(paths, sep='|', names=names, dtype=dtype, chunksize=chunksize, quoting=3) else: paths = [f'{base_path}/{fn}' for fn in files if os.path.getsize(f'{base_path}/{fn}') > 0] if table in refresh_tables: base_path = f'{data_dir}/data_refresh/{table}' paths = paths + [f'{base_path}/{fn}' for fn in os.listdir(base_path) if os.path.getsize(f'{base_path}/{fn}') > 0] df = dask_cudf.read_csv(paths, sep='|', names=names, dtype=types, chunksize=chunksize, quoting=3) return df import os, subprocess, math def multiplier(unit): if unit == 'G': return 1 elif unit == 'T': return 1000 else: return 0 # we use size of the CSV data on disk to determine number of Parquet partitions def get_size_gb(table): path = data_dir + 'data/'+table size = subprocess.check_output(['du','-sh', path]).split()[0].decode('utf-8') unit = size[-1] size = math.ceil(float(size[:-1])) * multiplier(unit) if table in refresh_tables: path = data_dir + 'data_refresh/'+table refresh_size = subprocess.check_output(['du','-sh', path]).split()[0].decode('utf-8') size = size + math.ceil(float(refresh_size[:-1])) * multiplier(refresh_size[-1]) return size def repartition(table, outdir, npartitions=None, chunksize=None, compression='snappy'): size = get_size_gb(table) if npartitions is None: npartitions = max(1, size) print(f'Converting {table} of {size} GB to {npartitions} parquet files, chunksize: {chunksize}') read_csv_table(table, chunksize).repartition(npartitions=npartitions).to_parquet(outdir+table, compression=compression) ###Output _____no_output_____ ###Markdown Generate list of tables to convert ###Code import os # these tables have extra data produced by bigbench dataGen refresh_tables = [ 'customer', 'customer_address', 'inventory', 'item', 'item_marketprices', 'product_reviews', 'store_returns', 'store_sales', 'web_clickstreams', 'web_returns', 'web_sales' ] tables = [table.split('.')[0] for table in os.listdir(spark_schema_dir)] ###Output _____no_output_____ ###Markdown Convert all tables to Parquet ###Code import time scale = 'sf10000' part_size = 3 chunksize = '128 MiB' # location of 
bigBench dataGen's CSV output data_dir = f'/mnt/weka/tpcx-bb/{scale}/' # location you want to write Parquet versions of the table data outdir = f'/mnt/weka/tpcx-bb/{scale}/parquet_{part_size}gb/' total = 0 for table in tables: size_gb = get_size_gb(table) # product_reviews has lengthy strings which exceed cudf's max number of characters per column # we use smaller partitions to avoid overflowing this character limit if table == 'product_reviews': npartitions = max(1, int(size_gb/1)) else: npartitions = max(1, int(size_gb/part_size)) t0 = time.time() repartition(table, outdir, npartitions, chunksize, compression='snappy') t1 = time.time() total = total + (t1-t0) print(f'{table} took {t1-t0} of {total}\n') print(f'{chunksize} took {total}s') ###Output _____no_output_____
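###Markdown As an optional check after the conversion loop, one of the written tables can be read back with `dask_cudf` to confirm the partition count and row count. This is a small sketch, assuming the Dask client and the `outdir` variable defined above are still active; the table name used here is just an example. ###Code
import dask_cudf

table = 'store_sales'  # example table; any converted table works
ddf = dask_cudf.read_parquet(f'{outdir}{table}')
print(ddf.npartitions)  # number of Parquet partitions written
print(len(ddf))         # row count; triggers a distributed read
###Output _____no_output_____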
notebooks/howto.ipynb
###Markdown HOWTO Guide for HSTCosmicrays (Beta version)1. Create `labeler.CosmicRayLabel()` and `metadata.GenerateMetadata()` objects to store all the necessary information in convenient containers2. Run the cosmic ray labeling algorithm on the input image two separate ways - By using the DQ array information after running a cosmic ray rejection routine - By defining a custom threshold value to use when generating a binary image3. Using the generated label, compute statistics for each one of the cosmic rays identified 4. Load in the pre-trained ML model for distinguishing cosmic rays from point sources and use it to classify the identified sources5. Use a more interesting dataset containing acutal an astrophysical source ###Code %matplotlib notebook import os import glob import pickle import sys sys.path.append('/Users/nmiles/hst_cosmic_rays/') import warnings warnings.simplefilter('ignore') # Import packages from the hst_cosmic_rays package from pipeline.label import labeler from pipeline.stat_utils import statshandler from pipeline.utils import metadata, initialize from astropy.io import fits from astropy.visualization import ImageNormalize, ZScaleInterval,\ SqrtStretch, LinearStretch, LogStretch import matplotlib.pyplot as plt from matplotlib.patches import Circle import numpy as np ###Output _____no_output_____ ###Markdown Define a convenience dictionary to make it simpler to determine the img stretch when displaying FITS files ###Code stretchdict_ = {'sqrt': SqrtStretch(), 'log': LogStretch(), 'linear':LinearStretch()} ###Output _____no_output_____ ###Markdown Setup a path to a directory containing the data you want to analyze ###Code data_path = '/Users/nmiles/hst_cosmic_rays/data/STIS/SAA_data' flist = glob.glob(os.path.join(data_path, 'o*flt.fits')) print('\n'.join(flist[:5])) ###Output _____no_output_____ ###Markdown Define a helper function to use for plotting images ###Code def plot(data, norm=None, stretch_type=None, vmin=None, vmax=None): """Simple plotting function""" if norm is None and stretch_type is None: pass elif norm is None and stretch_type is not None: if vmin is not None and vmax is not None: norm = ImageNormalize(data, stretch=stretchdict_[stretch_type],vmin=vmin, vmax=vmax) else: norm = ImageNormalize(data, stretch=stretchdict_[stretch_type], interval=ZScaleInterval()) fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(5,5)) im = ax.imshow(data, norm=norm, origin='lower', cmap='gray') plt.show() ###Output _____no_output_____ ###Markdown Now that we have a list of files that we want to examine and all our helper functions defined, we examine the two classes, `metadata.GenerateMetadata()` and `labeler.CosmicRayLabel`, from the `hstcosmicrays` package that we will be using. ###Code metadata.GenerateMetadata? labeler.CosmicRayLabel? instr = 'STIS_CCD' test_file = flist[0] ###Output _____no_output_____ ###Markdown The first thing we want to do is examine some metadata associated with our test file. 
The two attributes of the `GenerateMetadata` class used to store the relevant metadata are `instr_cfg` and `metadata` ###Code # Create a metadata object for our test file and instr img_meta = metadata.GenerateMetadata(fname=test_file, instr=instr) # Get the image metadata img_meta.get_image_data() # Get the observatory metadata from the SPT file img_meta.get_observatory_info() # Print out the extracted metadata and our instrument configuration print(f"Metadata extracted for {os.path.basename(test_file)}") print("-"*50) for key, val in img_meta.metadata.items(): print(f"{key} -> {val}") print("-"*50,'\n') print(f"Instrument Configuration for {instr}") print("-"*50) for key1, val1 in img_meta.instr_cfg.items(): if isinstance(val1, dict): print(f"{str(key1):}:") for key, val in img_meta.instr_cfg[key1].items(): print(f"{str(key):>25} -> {str(val):}") ###Output _____no_output_____ ###Markdown Ok, now that we have the required metadata read in, we create a `CosmicRayLabel` object. We use the gain keyword contained in the `instr_cfg` attribute of the `GenerateMetadata` object that we created above. This ensures that if the units of the input file are in DN or COUNTS, we convert it to ELECTRONS before proceeding. ###Code # Create an instance of the CosmicRayLabel class and read in SCI and DQ extensions cr_label = labeler.CosmicRayLabel( fname=test_files[0], gain_keyword=img_meta.instr_cfg['instr_params']['gain_keyword'] ) # Read in the sci data cr_label.get_data(extname='sci', extnums=[1]) # Read in the dq data cr_label.get_data(extname='dq', extnums=[1]) ###Output _____no_output_____ ###Markdown Now lets plot each extension that we just read in ###Code plot(cr_label.dq) plot(cr_label.sci, stretch_type='sqrt', vmin=0, vmax=20) ###Output _____no_output_____ ###Markdown There are two ways to label cosmic rays. 1. Using the DQ array to identify groups of pixels affected by the same CR2. Using a custom generated binary image to identify groups of pixels affected by the same CRThe cell blocks below demonstrate the first methodIf the images have been processed such that their DQ array contains BIT flag for identifiy cosmic ray affected pixels, then we run the labeling analysis with the following parameters defined below. ###Code dq_labeling_parameters = { 'use_dq': True, # Whether or not to use the DQ array 'dq_flag': 8192, # The BIT flag identifying CR (default 8192) 'do_bitwise_comp': True, # Do a BITWISE_AND comparison with dq_flag 'deblend': False, # If True, try to deblend (experimental, best to leave as False) 'threshold_l': 2, # Lower threshold for size of the labeled object to be consider a CR 'threshold_u': 5000, # Upper threshold for size of labeled object to be consider a CR 'pix_thresh': None, # Not used for using the DQ to label cosmic rays 'structure_element': np.ones((3,3)) # Structuring element to be used in labeling } ###Output _____no_output_____ ###Markdown Run the labeling analysis code for CCDs using the parameters defined above. Once the processing has finished, plot the `SCI` extension and the CR segmentation map side-by-side using the `plot()` method of the `CosmicRayLabel` class. ###Code cr_label.ccd_labeling(**dq_labeling_parameters) cr_label.plot(xlim=(200, 400), ylim=(200, 400)) ###Output _____no_output_____ ###Markdown If the image's DQ arry doesnt contain BIT flags for CRs, then you can instead run the labeling using a custom threshold (this is option 2. 
mentioned previously) ###Code threshold_labeling_parameters = { 'use_dq': False, 'dq_flag': None, 'do_bitwise_comp': False, 'deblend': False, # If True, try to deblend (experimental, best to leave as False) 'threshold_l': 2, # Lower threshold for size of the labeled object to be consider a CR 'threshold_u': 5000, # Upper threshold for size of labeled object to be consider a CR 'pix_thresh': 3*np.mean(cr_label.sci), # Set the absolute threshold to 3x the mean val 'structure_element': np.ones((3,3)) # Structuring element to be used in labeling } cr_label.ccd_labeling(**threshold_labeling_parameters) cr_label.plot(xlim=(200, 400), ylim=(200, 400)) ###Output _____no_output_____ ###Markdown Once we have the label in hand, we can start computing some statistics describing the identified cosmic rays using the `Stats` class in the `statshandler` module. As before, we inspect the class first to determine the proper inputs ###Code stats = statshandler.Stats? stats = statshandler.Stats cr_stats = statshandler.Stats(cr_label=cr_label, integration_time=img_meta.metadata['integration_time'], detector_size=img_meta.instr_cfg['instr_params']['detector_size']) cr_stats.compute_cr_statistics() ###Output _____no_output_____ ###Markdown Now we load the previously trained K-Nearest Neighbors classifier and use the model to predict whether the object identified are cosmic rays (1) or stars (0). Since this is a dark frame, the exptectation is for everything to be classified as a cosmic ray ###Code with open('knn_classifier_Oct03_2019.pkl', 'rb') as fd: clf = pickle.load(fd) predict = clf.predict(list(zip(cr_stats.size_in_sigmas, cr_stats.shapes))) predict.sum() == len(predict) ###Output _____no_output_____ ###Markdown Example 2: Identifying CRs in observations with external sourcesNow we are going to apply everything that we just used above, but instead we are going to analyze an image with actual sources and cosmic rays ###Code from astropy.convolution import Gaussian2DKernel from astropy.stats import gaussian_fwhm_to_sigma from photutils import detect_sources from photutils import detect_threshold from photutils import detect_sources datapath = '/Users/nmiles/hst_cosmic_rays/notebooks/MAST_2019-11-09T1348/HST/ocr7qvhaq' flist = glob.glob(datapath+'/*flt.fits') star_label = labeler.CosmicRayLabel(fname=flist[0]) star_label.get_data(extname='sci',extnums=[1]) plot(star_label.sci, stretch_type='log', vmin=0, vmax=1000) star_label.sci np.mean(star_label.sci) # sigma = 3.0 * gaussian_fwhm_to_sigma # FWHM = 3. 
# kernel = Gaussian2DKernel(sigma, x_size=3, y_size=3) # kernel.normalize() segm = detect_sources(star_label.sci, 10*np.median(star_label.sci) , npixels=8) threshold_labeling_parameters = { 'use_dq': False, 'dq_flag': None, 'do_bitwise_comp': False, 'deblend': False, # If True, try to deblend (experimental, best to leave as False) 'threshold_l': 2, # Lower threshold for size of the labeled object to be consider a CR 'threshold_u': 800, # Upper threshold for size of labeled object to be consider a CR 'pix_thresh': 10*np.mean(cr_label.sci), # Set the absolute threshold to 3x the mean val 'structure_element': np.ones((3,3)) # Structuring element to be used in labeling } star_label.ccd_labeling(**threshold_labeling_parameters) star_label.plot(instr='STIS/CCD') star_stats = statshandler.Stats( cr_label=star_label, integration_time=img_meta.metadata['integration_time'], detector_size = img_meta.instr_cfg['instr_params']['detector_size'] ) star_stats.compute_cr_statistics() classifications = clf.predict(list(zip(star_stats.size_in_sigmas, star_stats.shapes))) percentage_of_crs = np.sum(classifications)/ len(classifications) percentage_of_stars = len(np.where(classifications==0)[0])/len(classifications) print(f"Percentage of CRs: {percentage_of_crs:.2%}") print(f"Percentage of stars: {percentage_of_stars:.2%}") print(f"Total: {percentage_of_crs + percentage_of_stars:.2%}") star_label.plot(instr='STIS/CCD') for i, c in enumerate(classifications): centroid = star_stats.centroids[i] if c: patch = Circle(xy=(centroid[1], centroid[0]), radius=3, color='red', fill=False) else: patch = Circle(xy=(centroid[1], centroid[0]), radius=3, color='green', fill=False) star_label.ax1.add_patch(patch) ###Output _____no_output_____
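###Markdown As a short follow-up, the classifications can be used to split the detected sources into cosmic-ray and point-source lists for downstream analysis. This is a minimal sketch, assuming `classifications` (1 = cosmic ray, 0 = star) and `star_stats` from the cells above are still in scope. ###Code
import numpy as np

centroids = np.asarray(star_stats.centroids)
cr_centroids = centroids[classifications == 1]
star_centroids = centroids[classifications == 0]
print(f"{len(cr_centroids)} cosmic rays, {len(star_centroids)} point sources")
###Output _____no_output_____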
ed_intervals.ipynb
###Markdown merge intervalshttps://www.educative.io/courses/grokking-the-coding-interview/3jyVPKRA8yx ```Intervals: [[1,4], [2,5], [7,9]]Output: [[1,5], [7,9]]Explanation: Since the first two intervals [1,4] and [2,5] overlap, we merged them into one [1,5].``` https://leetcode.com/problems/merge-intervals/submissions/ ###Code def merge(intervals): merged = [] l = len(intervals) intervals = sorted(intervals, key = lambda x: x[0]) start, end = intervals[0] for i, (new_start, new_end) in enumerate(intervals[1:]): if new_start > end: merged.append([start, end]) start = new_start end = max(end, new_end) merged.append([start, end]) return merged merge([[1, 4],[2, 5], [4, 9]]) # merge([[1, 3]]) ###Output _____no_output_____ ###Markdown Insert Interval https://www.educative.io/courses/grokking-the-coding-interview/3jKlyNMJPEM Given a list of non-overlapping intervals sorted by their start time, insert a given interval at the correct position and merge all necessary intervals to produce a list that has only mutually exclusive intervals. ```Example 1:Input: Intervals=[[1,3], [5,7], [8,12]], New Interval=[4,6]Output: [[1,3], [4,7], [8,12]]Explanation: After insertion, since [4,6] overlaps with [5,7], we merged them into one [4,7].Example 2:Input: Intervals=[[1,3], [5,7], [8,12]], New Interval=[4,10]Output: [[1,3], [4,12]]Explanation: After insertion, since [4,10] overlaps with [5,7] & [8,12], we merged them into [4,12].Example 3:Input: Intervals=[[2,3],[5,7]], New Interval=[1,4]Output: [[1,4], [5,7]]Explanation: After insertion, since [1,4] overlaps with [2,3], we merged them into one [1,4].``` ###Code def insert(intervals, new_interval): n = len(intervals) i = 0 res = [] # skip until we find an interval ending after the start of the new interval while i < n and intervals[i][1] < new_interval[0]: res.append(intervals[i]) i += 1 # combine intervals into new interval until there is overlap between current inteval and the new interval # or unit start of current interval is less than end of new interval while i < n and intervals[i][0] <= new_interval[1]: new_interval[0] = min(intervals[i][0], new_interval[0]) new_interval[1] = max(intervals[i][1], new_interval[1]) i += 1 res.append(new_interval) while i < n: res.append(intervals[i]) i += 1 return res ###Output _____no_output_____ ###Markdown min meetings rooms https://www.educative.io/courses/grokking-the-coding-interview/JQMAmrVPL7l ```Example 1:Meetings: [[1,4], [2,5], [7,9]]Output: 2Explanation: Since [1,4] and [2,5] overlap, we need two rooms to hold these two meetings. 
[7,9] can occur in any of the two rooms later.Example 2:Meetings: [[6,7], [2,4], [8,12]]Output: 1Explanation: None of the meetings overlap, therefore we only need one room to hold all meetings.Example 3:Meetings: [[1,4], [2,3], [3,6]]Output:2Explanation: Since [1,4] overlaps with the other two meetings [2,3] and [3,6], we need two rooms to hold all the meetings.Example 4:Meetings: [[4,5], [2,3], [2,4], [3,5]]Output: 2Explanation: We will need one room for [2,3] and [3,5], and another room for [2,4] and [4,5].Here is a visual representation of Example 4:``` ###Code import heapq def min_meeting_rooms(meetings): """ we don't need the heap len var since we never pop without pushing so we can just return the len of heap """ if not meetings: return 0 meetings = sorted(meetings, key = lambda x: x[0]) minheap = [meetings[0][1]] # max_heap_len = 1 for start, end in meetings[1:]: # if the new meeting starts before or when the soonest ending one ends if start >= minheap[0]: heapq.heappushpop(minheap, end) else: heapq.heappush(minheap, end) # max_heap_len = max(max_heap_len, len(minheap)) return len(minheap) min_meeting_rooms([[1,4], [2,5], [7,9]]) min_meeting_rooms([[6,7], [2,4], [8,12]]) min_meeting_rooms([[1,4], [2,3], [3,6]]) min_meeting_rooms([[1,4], [5,6], [8,9], [2,6], [3,6], [12, 14], [10, 13], [8, 11]]) min_meeting_rooms([[4,5], [2,3], [2,4], [3,5]]) ###Output _____no_output_____ ###Markdown Maximum CPU Load (hard) ```Example 1:Jobs: [[1,4,3], [2,5,4], [7,9,6]]Output: 7Explanation: Since [1,4,3] and [2,5,4] overlap, their maximum CPU load (3+4=7) will be when both the jobs are running at the same time i.e., during the time interval (2,4).Example 2:Jobs: [[6,7,10], [2,4,11], [8,12,15]]Output: 15Explanation: None of the jobs overlap, therefore we will take the maximum load of any job which is 15.Example 3:Jobs: [[1,4,2], [2,4,1], [3,6,5]]Output: 8Explanation: Maximum CPU load will be 8 as all jobs overlap during the time interval [3,4]. ``` ###Code def find_max_cpu_load(jobs): if not jobs: return 0 jobs = sorted(jobs, key = lambda x: x[0]) current_load = jobs[0][2] max_load = current_load minheap = [(jobs[0][1], current_load)] for start, end, load in jobs[1:]: while len(minheap) > 0 and start >= minheap[0][0]: _ , neg_load = heapq.heappop(minheap) current_load -= neg_load heapq.heappush(minheap, (end, load)) current_load += load max_load = max(max_load, current_load) return max_load find_max_cpu_load([[1,4,3], [2,5,4], [7,9,6]]), find_max_cpu_load([[6,7,10], [2,4,11], [8,12,15]]) find_max_cpu_load([[1,4,2], [2,4,1], [3,6,5]]) ###Output _____no_output_____ ###Markdown Free Employee Intervals https://www.educative.io/courses/grokking-the-coding-interview/RLwKZWgMJ1q ###Code def find_employee_free_time(intervals): intervals = sum(intervals, []) intervals = sorted(intervals, key = lambda x: x[0]) res = [] start, end = intervals[0] for st, en in intervals[1:]: if st > end: res.append((end, st)) end = max(end, en) return res intervals = [[(1, 3), (5, 6)], [(2, 3), (6, 8)]] find_employee_free_time(intervals) intervals = [[(1, 3), (9, 12)], [(2, 4)], [(6, 8)]] find_employee_free_time(intervals) intervals = [[(1, 3)], [(2, 4)], [(3, 5), (7, 9)]] find_employee_free_time(intervals) ###Output _____no_output_____
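###Markdown A few quick sanity checks for the interval helpers defined above (assuming `merge`, `insert`, `min_meeting_rooms`, `find_max_cpu_load` and `find_employee_free_time` are all in scope), using the expected outputs from the problem statements. ###Code
assert merge([[1, 4], [2, 5], [7, 9]]) == [[1, 5], [7, 9]]
assert insert([[1, 3], [5, 7], [8, 12]], [4, 6]) == [[1, 3], [4, 7], [8, 12]]
assert min_meeting_rooms([[1, 4], [2, 5], [7, 9]]) == 2
assert find_max_cpu_load([[1, 4, 3], [2, 5, 4], [7, 9, 6]]) == 7
assert find_employee_free_time([[(1, 3), (5, 6)], [(2, 3), (6, 8)]]) == [(3, 5)]
print("all interval checks passed")
###Output _____no_output_____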
xinetzone/docs/tutorial/tvmc_command_line_driver.ipynb
###Markdown (sphx_glr_tutorial_tvmc_command_line_driver.py)= Compiling and Optimizing a Model with TVMC. Original authors: [Leandro Nunes](https://github.com/leandron), [Matthew Barrett](https://github.com/mbaret), [Chris Hoge](https://github.com/hogepodge). In this section we will use TVMC, the TVM command-line driver. TVMC is a tool that exposes TVM features such as auto-tuning, compiling, profiling and executing models through a command-line interface. After completing this section, we will have used TVMC to accomplish the following tasks: * Compile a pre-trained ResNet-50 v2 model for the TVM runtime. * Run a real image through the compiled model, and interpret the output and model performance. * Tune the model on a CPU using TVM. * Re-compile an optimized model using the tuning data collected by TVM. * Run the image through the optimized model, and compare the output and model performance. The goal of this section is to give you an overview of TVM's and TVMC's capabilities, and to lay the groundwork for understanding how TVM works. Using TVMC. TVMC is a Python application and part of the TVM Python package. When you install TVM using a Python package, you will get TVMC as a command-line application called ``tvmc``. The location of this command will vary depending on your platform and installation method. Alternatively, if you have TVM as a Python module on your ``$PYTHONPATH``, you can access the command-line driver functionality via the executable Python module ``python -m tvm.driver.tvmc``. For simplicity, this tutorial will refer to the TVMC command line as ``tvmc``, but the same results can be obtained with ``python -m tvm.driver.tvmc``. You can check the help page with:
###Code !python -m tvm.driver.tvmc --help
###Output usage: tvmc [--config CONFIG] [-v] [--version] [-h] {micro,run,tune,compile} ... TVM compiler driver options: --config CONFIG configuration json file -v, --verbose increase verbosity --version print the version and exit -h, --help show this help message and exit. commands: {micro,run,tune,compile} micro select micro context. run run a compiled module tune auto-tune a model compile compile a model. TVMC - TVM driver command-line interface
###Markdown The main TVM functionality available through ``tvmc`` comes from the ``compile`` and ``run`` subcommands, as well as ``tune``. To see the specific options under a given subcommand, use ``tvmc <subcommand> --help``. Each of these commands will be introduced in this tutorial, but first we need to download a pre-trained model to work with. Obtaining the Model. In this tutorial we will use ResNet-50 v2. ResNet-50 is a convolutional neural network, 50 layers deep, designed for image classification. The model we will use has been pre-trained on more than one million images, covering 1000 different classifications. The network expects an input image size of 224x224. If you are interested in exploring the structure of the ResNet-50 model further, we recommend downloading [Netron](https://netron.app), a freely available ML model viewer. In this tutorial we will use the model in ONNX format.
###Code !wget https://github.com/onnx/models/raw/main/vision/classification/resnet/model/resnet50-v2-7.onnx
###Output --2022-04-26 13:07:52-- https://github.com/onnx/models/raw/main/vision/classification/resnet/model/resnet50-v2-7.onnx Resolving github.com (github.com)... 20.205.243.166 Connecting to github.com (github.com)|20.205.243.166|:443... connected. HTTP request sent, awaiting response... 302 Found Location: https://media.githubusercontent.com/media/onnx/models/main/vision/classification/resnet/model/resnet50-v2-7.onnx [following] --2022-04-26 13:07:53-- https://media.githubusercontent.com/media/onnx/models/main/vision/classification/resnet/model/resnet50-v2-7.onnx Resolving media.githubusercontent.com (media.githubusercontent.com)... 185.199.111.133, 185.199.108.133, 185.199.110.133, ... Connecting to media.githubusercontent.com (media.githubusercontent.com)|185.199.111.133|:443... connected. HTTP request sent, awaiting response...
200 OK Length: 102442450 (98M) [application/octet-stream] Saving to: ‘resnet50-v2-7.onnx’ resnet50-v2-7.onnx 100%[===================>] 97.70M 4.51MB/s in 25s 2022-04-26 13:08:27 (3.89 MB/s) - ‘resnet50-v2-7.onnx’ saved [102442450/102442450] ###Markdown 为了让该模型可以被其他教程使用,需要: ###Code !mv resnet50-v2-7.onnx ../../_models/resnet50-v2-7.onnx ###Output _____no_output_____ ###Markdown ```{admonition} 支持的模型格式TVMC 支持用 Keras、ONNX、TensorFlow、TFLite 和 Torch 创建的模型。如果你需要明确地提供你所使用的模型格式,请使用选项 ``--model-format``。```更多信息见: ###Code !python -m tvm.driver.tvmc compile --help ###Output usage: tvmc compile [-h] [--cross-compiler CROSS_COMPILER] [--cross-compiler-options CROSS_COMPILER_OPTIONS] [--desired-layout {NCHW,NHWC}] [--dump-code FORMAT] [--model-format {keras,onnx,pb,tflite,pytorch,paddle}] [-o OUTPUT] [-f {so,mlf}] [--pass-config name=value] [--target TARGET] [--target-example_target_hook-from_device TARGET_EXAMPLE_TARGET_HOOK_FROM_DEVICE] [--target-example_target_hook-libs TARGET_EXAMPLE_TARGET_HOOK_LIBS] [--target-example_target_hook-model TARGET_EXAMPLE_TARGET_HOOK_MODEL] [--target-example_target_hook-tag TARGET_EXAMPLE_TARGET_HOOK_TAG] [--target-example_target_hook-device TARGET_EXAMPLE_TARGET_HOOK_DEVICE] [--target-example_target_hook-keys TARGET_EXAMPLE_TARGET_HOOK_KEYS] [--target-ext_dev-from_device TARGET_EXT_DEV_FROM_DEVICE] [--target-ext_dev-libs TARGET_EXT_DEV_LIBS] [--target-ext_dev-model TARGET_EXT_DEV_MODEL] [--target-ext_dev-system-lib TARGET_EXT_DEV_SYSTEM_LIB] [--target-ext_dev-tag TARGET_EXT_DEV_TAG] [--target-ext_dev-device TARGET_EXT_DEV_DEVICE] [--target-ext_dev-keys TARGET_EXT_DEV_KEYS] [--target-llvm-fast-math TARGET_LLVM_FAST_MATH] [--target-llvm-opt-level TARGET_LLVM_OPT_LEVEL] [--target-llvm-unpacked-api TARGET_LLVM_UNPACKED_API] [--target-llvm-from_device TARGET_LLVM_FROM_DEVICE] [--target-llvm-fast-math-ninf TARGET_LLVM_FAST_MATH_NINF] [--target-llvm-mattr TARGET_LLVM_MATTR] [--target-llvm-num-cores TARGET_LLVM_NUM_CORES] [--target-llvm-libs TARGET_LLVM_LIBS] [--target-llvm-fast-math-nsz TARGET_LLVM_FAST_MATH_NSZ] [--target-llvm-link-params TARGET_LLVM_LINK_PARAMS] [--target-llvm-interface-api TARGET_LLVM_INTERFACE_API] [--target-llvm-fast-math-contract TARGET_LLVM_FAST_MATH_CONTRACT] [--target-llvm-system-lib TARGET_LLVM_SYSTEM_LIB] [--target-llvm-tag TARGET_LLVM_TAG] [--target-llvm-mtriple TARGET_LLVM_MTRIPLE] [--target-llvm-model TARGET_LLVM_MODEL] [--target-llvm-mfloat-abi TARGET_LLVM_MFLOAT_ABI] [--target-llvm-mcpu TARGET_LLVM_MCPU] [--target-llvm-device TARGET_LLVM_DEVICE] [--target-llvm-runtime TARGET_LLVM_RUNTIME] [--target-llvm-fast-math-arcp TARGET_LLVM_FAST_MATH_ARCP] [--target-llvm-fast-math-reassoc TARGET_LLVM_FAST_MATH_REASSOC] [--target-llvm-mabi TARGET_LLVM_MABI] [--target-llvm-keys TARGET_LLVM_KEYS] [--target-llvm-fast-math-nnan TARGET_LLVM_FAST_MATH_NNAN] [--target-hybrid-from_device TARGET_HYBRID_FROM_DEVICE] [--target-hybrid-libs TARGET_HYBRID_LIBS] [--target-hybrid-model TARGET_HYBRID_MODEL] [--target-hybrid-system-lib TARGET_HYBRID_SYSTEM_LIB] [--target-hybrid-tag TARGET_HYBRID_TAG] [--target-hybrid-device TARGET_HYBRID_DEVICE] [--target-hybrid-keys TARGET_HYBRID_KEYS] [--target-aocl-from_device TARGET_AOCL_FROM_DEVICE] [--target-aocl-libs TARGET_AOCL_LIBS] [--target-aocl-model TARGET_AOCL_MODEL] [--target-aocl-system-lib TARGET_AOCL_SYSTEM_LIB] [--target-aocl-tag TARGET_AOCL_TAG] [--target-aocl-device TARGET_AOCL_DEVICE] [--target-aocl-keys TARGET_AOCL_KEYS] [--target-nvptx-max_num_threads TARGET_NVPTX_MAX_NUM_THREADS] [--target-nvptx-thread_warp_size 
TARGET_NVPTX_THREAD_WARP_SIZE] [--target-nvptx-from_device TARGET_NVPTX_FROM_DEVICE] [--target-nvptx-libs TARGET_NVPTX_LIBS] [--target-nvptx-model TARGET_NVPTX_MODEL] [--target-nvptx-system-lib TARGET_NVPTX_SYSTEM_LIB] [--target-nvptx-mtriple TARGET_NVPTX_MTRIPLE] [--target-nvptx-tag TARGET_NVPTX_TAG] [--target-nvptx-mcpu TARGET_NVPTX_MCPU] [--target-nvptx-device TARGET_NVPTX_DEVICE] [--target-nvptx-keys TARGET_NVPTX_KEYS] [--target-opencl-max_num_threads TARGET_OPENCL_MAX_NUM_THREADS] [--target-opencl-thread_warp_size TARGET_OPENCL_THREAD_WARP_SIZE] [--target-opencl-from_device TARGET_OPENCL_FROM_DEVICE] [--target-opencl-libs TARGET_OPENCL_LIBS] [--target-opencl-model TARGET_OPENCL_MODEL] [--target-opencl-system-lib TARGET_OPENCL_SYSTEM_LIB] [--target-opencl-tag TARGET_OPENCL_TAG] [--target-opencl-device TARGET_OPENCL_DEVICE] [--target-opencl-keys TARGET_OPENCL_KEYS] [--target-metal-max_num_threads TARGET_METAL_MAX_NUM_THREADS] [--target-metal-thread_warp_size TARGET_METAL_THREAD_WARP_SIZE] [--target-metal-from_device TARGET_METAL_FROM_DEVICE] [--target-metal-libs TARGET_METAL_LIBS] [--target-metal-keys TARGET_METAL_KEYS] [--target-metal-model TARGET_METAL_MODEL] [--target-metal-system-lib TARGET_METAL_SYSTEM_LIB] [--target-metal-tag TARGET_METAL_TAG] [--target-metal-device TARGET_METAL_DEVICE] [--target-metal-max_function_args TARGET_METAL_MAX_FUNCTION_ARGS] [--target-webgpu-max_num_threads TARGET_WEBGPU_MAX_NUM_THREADS] [--target-webgpu-from_device TARGET_WEBGPU_FROM_DEVICE] [--target-webgpu-libs TARGET_WEBGPU_LIBS] [--target-webgpu-model TARGET_WEBGPU_MODEL] [--target-webgpu-system-lib TARGET_WEBGPU_SYSTEM_LIB] [--target-webgpu-tag TARGET_WEBGPU_TAG] [--target-webgpu-device TARGET_WEBGPU_DEVICE] [--target-webgpu-keys TARGET_WEBGPU_KEYS] [--target-rocm-max_num_threads TARGET_ROCM_MAX_NUM_THREADS] [--target-rocm-thread_warp_size TARGET_ROCM_THREAD_WARP_SIZE] [--target-rocm-from_device TARGET_ROCM_FROM_DEVICE] [--target-rocm-libs TARGET_ROCM_LIBS] [--target-rocm-mattr TARGET_ROCM_MATTR] [--target-rocm-max_shared_memory_per_block TARGET_ROCM_MAX_SHARED_MEMORY_PER_BLOCK] [--target-rocm-model TARGET_ROCM_MODEL] [--target-rocm-system-lib TARGET_ROCM_SYSTEM_LIB] [--target-rocm-mtriple TARGET_ROCM_MTRIPLE] [--target-rocm-tag TARGET_ROCM_TAG] [--target-rocm-device TARGET_ROCM_DEVICE] [--target-rocm-mcpu TARGET_ROCM_MCPU] [--target-rocm-max_threads_per_block TARGET_ROCM_MAX_THREADS_PER_BLOCK] [--target-rocm-keys TARGET_ROCM_KEYS] [--target-vulkan-max_num_threads TARGET_VULKAN_MAX_NUM_THREADS] [--target-vulkan-thread_warp_size TARGET_VULKAN_THREAD_WARP_SIZE] [--target-vulkan-from_device TARGET_VULKAN_FROM_DEVICE] [--target-vulkan-max_per_stage_descriptor_storage_buffer TARGET_VULKAN_MAX_PER_STAGE_DESCRIPTOR_STORAGE_BUFFER] [--target-vulkan-driver_version TARGET_VULKAN_DRIVER_VERSION] [--target-vulkan-supports_16bit_buffer TARGET_VULKAN_SUPPORTS_16BIT_BUFFER] [--target-vulkan-max_block_size_z TARGET_VULKAN_MAX_BLOCK_SIZE_Z] [--target-vulkan-libs TARGET_VULKAN_LIBS] [--target-vulkan-supports_dedicated_allocation TARGET_VULKAN_SUPPORTS_DEDICATED_ALLOCATION] [--target-vulkan-supported_subgroup_operations TARGET_VULKAN_SUPPORTED_SUBGROUP_OPERATIONS] [--target-vulkan-mattr TARGET_VULKAN_MATTR] [--target-vulkan-max_storage_buffer_range TARGET_VULKAN_MAX_STORAGE_BUFFER_RANGE] [--target-vulkan-max_push_constants_size TARGET_VULKAN_MAX_PUSH_CONSTANTS_SIZE] [--target-vulkan-supports_push_descriptor TARGET_VULKAN_SUPPORTS_PUSH_DESCRIPTOR] [--target-vulkan-supports_int64 TARGET_VULKAN_SUPPORTS_INT64] 
[--target-vulkan-supports_float32 TARGET_VULKAN_SUPPORTS_FLOAT32] [--target-vulkan-model TARGET_VULKAN_MODEL] [--target-vulkan-max_block_size_x TARGET_VULKAN_MAX_BLOCK_SIZE_X] [--target-vulkan-system-lib TARGET_VULKAN_SYSTEM_LIB] [--target-vulkan-max_block_size_y TARGET_VULKAN_MAX_BLOCK_SIZE_Y] [--target-vulkan-tag TARGET_VULKAN_TAG] [--target-vulkan-supports_int8 TARGET_VULKAN_SUPPORTS_INT8] [--target-vulkan-max_spirv_version TARGET_VULKAN_MAX_SPIRV_VERSION] [--target-vulkan-vulkan_api_version TARGET_VULKAN_VULKAN_API_VERSION] [--target-vulkan-supports_8bit_buffer TARGET_VULKAN_SUPPORTS_8BIT_BUFFER] [--target-vulkan-device_type TARGET_VULKAN_DEVICE_TYPE] [--target-vulkan-supports_int32 TARGET_VULKAN_SUPPORTS_INT32] [--target-vulkan-device TARGET_VULKAN_DEVICE] [--target-vulkan-max_threads_per_block TARGET_VULKAN_MAX_THREADS_PER_BLOCK] [--target-vulkan-max_uniform_buffer_range TARGET_VULKAN_MAX_UNIFORM_BUFFER_RANGE] [--target-vulkan-driver_name TARGET_VULKAN_DRIVER_NAME] [--target-vulkan-supports_integer_dot_product TARGET_VULKAN_SUPPORTS_INTEGER_DOT_PRODUCT] [--target-vulkan-supports_storage_buffer_storage_class TARGET_VULKAN_SUPPORTS_STORAGE_BUFFER_STORAGE_CLASS] [--target-vulkan-supports_float16 TARGET_VULKAN_SUPPORTS_FLOAT16] [--target-vulkan-device_name TARGET_VULKAN_DEVICE_NAME] [--target-vulkan-supports_float64 TARGET_VULKAN_SUPPORTS_FLOAT64] [--target-vulkan-keys TARGET_VULKAN_KEYS] [--target-vulkan-max_shared_memory_per_block TARGET_VULKAN_MAX_SHARED_MEMORY_PER_BLOCK] [--target-vulkan-supports_int16 TARGET_VULKAN_SUPPORTS_INT16] [--target-cuda-max_num_threads TARGET_CUDA_MAX_NUM_THREADS] [--target-cuda-thread_warp_size TARGET_CUDA_THREAD_WARP_SIZE] [--target-cuda-from_device TARGET_CUDA_FROM_DEVICE] [--target-cuda-arch TARGET_CUDA_ARCH] [--target-cuda-libs TARGET_CUDA_LIBS] [--target-cuda-max_shared_memory_per_block TARGET_CUDA_MAX_SHARED_MEMORY_PER_BLOCK] [--target-cuda-model TARGET_CUDA_MODEL] [--target-cuda-system-lib TARGET_CUDA_SYSTEM_LIB] [--target-cuda-tag TARGET_CUDA_TAG] [--target-cuda-device TARGET_CUDA_DEVICE] [--target-cuda-mcpu TARGET_CUDA_MCPU] [--target-cuda-max_threads_per_block TARGET_CUDA_MAX_THREADS_PER_BLOCK] [--target-cuda-registers_per_block TARGET_CUDA_REGISTERS_PER_BLOCK] [--target-cuda-keys TARGET_CUDA_KEYS] [--target-sdaccel-from_device TARGET_SDACCEL_FROM_DEVICE] [--target-sdaccel-libs TARGET_SDACCEL_LIBS] [--target-sdaccel-model TARGET_SDACCEL_MODEL] [--target-sdaccel-system-lib TARGET_SDACCEL_SYSTEM_LIB] [--target-sdaccel-tag TARGET_SDACCEL_TAG] [--target-sdaccel-device TARGET_SDACCEL_DEVICE] [--target-sdaccel-keys TARGET_SDACCEL_KEYS] [--target-composite-from_device TARGET_COMPOSITE_FROM_DEVICE] [--target-composite-libs TARGET_COMPOSITE_LIBS] [--target-composite-devices TARGET_COMPOSITE_DEVICES] [--target-composite-model TARGET_COMPOSITE_MODEL] [--target-composite-tag TARGET_COMPOSITE_TAG] [--target-composite-device TARGET_COMPOSITE_DEVICE] [--target-composite-keys TARGET_COMPOSITE_KEYS] [--target-stackvm-from_device TARGET_STACKVM_FROM_DEVICE] [--target-stackvm-libs TARGET_STACKVM_LIBS] [--target-stackvm-model TARGET_STACKVM_MODEL] [--target-stackvm-system-lib TARGET_STACKVM_SYSTEM_LIB] [--target-stackvm-tag TARGET_STACKVM_TAG] [--target-stackvm-device TARGET_STACKVM_DEVICE] [--target-stackvm-keys TARGET_STACKVM_KEYS] [--target-aocl_sw_emu-from_device TARGET_AOCL_SW_EMU_FROM_DEVICE] [--target-aocl_sw_emu-libs TARGET_AOCL_SW_EMU_LIBS] [--target-aocl_sw_emu-model TARGET_AOCL_SW_EMU_MODEL] [--target-aocl_sw_emu-system-lib TARGET_AOCL_SW_EMU_SYSTEM_LIB] 
[--target-aocl_sw_emu-tag TARGET_AOCL_SW_EMU_TAG] [--target-aocl_sw_emu-device TARGET_AOCL_SW_EMU_DEVICE] [--target-aocl_sw_emu-keys TARGET_AOCL_SW_EMU_KEYS] [--target-c-unpacked-api TARGET_C_UNPACKED_API] [--target-c-from_device TARGET_C_FROM_DEVICE] [--target-c-libs TARGET_C_LIBS] [--target-c-constants-byte-alignment TARGET_C_CONSTANTS_BYTE_ALIGNMENT] [--target-c-executor TARGET_C_EXECUTOR] [--target-c-link-params TARGET_C_LINK_PARAMS] [--target-c-model TARGET_C_MODEL] [--target-c-workspace-byte-alignment TARGET_C_WORKSPACE_BYTE_ALIGNMENT] [--target-c-system-lib TARGET_C_SYSTEM_LIB] [--target-c-tag TARGET_C_TAG] [--target-c-interface-api TARGET_C_INTERFACE_API] [--target-c-mcpu TARGET_C_MCPU] [--target-c-device TARGET_C_DEVICE] [--target-c-runtime TARGET_C_RUNTIME] [--target-c-keys TARGET_C_KEYS] [--target-c-march TARGET_C_MARCH] [--target-hexagon-from_device TARGET_HEXAGON_FROM_DEVICE] [--target-hexagon-libs TARGET_HEXAGON_LIBS] [--target-hexagon-mattr TARGET_HEXAGON_MATTR] [--target-hexagon-model TARGET_HEXAGON_MODEL] [--target-hexagon-llvm-options TARGET_HEXAGON_LLVM_OPTIONS] [--target-hexagon-mtriple TARGET_HEXAGON_MTRIPLE] [--target-hexagon-system-lib TARGET_HEXAGON_SYSTEM_LIB] [--target-hexagon-mcpu TARGET_HEXAGON_MCPU] [--target-hexagon-device TARGET_HEXAGON_DEVICE] [--target-hexagon-tag TARGET_HEXAGON_TAG] [--target-hexagon-link-params TARGET_HEXAGON_LINK_PARAMS] [--target-hexagon-keys TARGET_HEXAGON_KEYS] [--tuning-records PATH] [--executor EXECUTOR] [--executor-graph-link-params EXECUTOR_GRAPH_LINK_PARAMS] [--executor-aot-workspace-byte-alignment EXECUTOR_AOT_WORKSPACE_BYTE_ALIGNMENT] [--executor-aot-unpacked-api EXECUTOR_AOT_UNPACKED_API] [--executor-aot-interface-api EXECUTOR_AOT_INTERFACE_API] [--executor-aot-link-params EXECUTOR_AOT_LINK_PARAMS] [--runtime RUNTIME] [--runtime-cpp-system-lib RUNTIME_CPP_SYSTEM_LIB] [--runtime-crt-system-lib RUNTIME_CRT_SYSTEM_LIB] [-v] [-O [0-3]] [--input-shapes INPUT_SHAPES] [--disabled-pass DISABLED_PASS] [--module-name MODULE_NAME] FILE positional arguments: FILE path to the input model file. options: -h, --help show this help message and exit --cross-compiler CROSS_COMPILER the cross compiler to generate target libraries, e.g. 'aarch64-linux-gnu-gcc'. --cross-compiler-options CROSS_COMPILER_OPTIONS the cross compiler options to generate target libraries, e.g. '-mfpu=neon-vfpv4'. --desired-layout {NCHW,NHWC} change the data layout of the whole graph. --dump-code FORMAT comma separated list of formats to export the input model, e.g. 'asm,ll,relay'. --model-format {keras,onnx,pb,tflite,pytorch,paddle} specify input model format. -o OUTPUT, --output OUTPUT output the compiled module to a specified archive. Defaults to 'module.tar'. -f {so,mlf}, --output-format {so,mlf} output format. Use 'so' for shared object or 'mlf' for Model Library Format (only for microTVM targets). Defaults to 'so'. --pass-config name=value configurations to be used at compile time. This option can be provided multiple times, each one to set one configuration value, e.g. '--pass-config relay.backend.use_auto_scheduler=0', e.g. '--pass- config tir.add_lower_pass=opt_level1,pass1,opt_level2,pass2'. --target TARGET compilation target as plain string, inline JSON or path to a JSON file --tuning-records PATH path to an auto-tuning log file by AutoTVM. If not presented, the fallback/tophub configs will be used. --executor EXECUTOR Executor to compile the model with --runtime RUNTIME Runtime to compile the model with -v, --verbose increase verbosity. 
-O [0-3], --opt-level [0-3] specify which optimization level to use. Defaults to '3'. --input-shapes INPUT_SHAPES specify non-generic shapes for model to run, format is "input_name:[dim1,dim2,...,dimn] input_name2:[dim1,dim2]". --disabled-pass DISABLED_PASS disable specific passes, comma-separated list of pass names. --module-name MODULE_NAME The output module name. Defaults to 'default'. target example_target_hook: --target-example_target_hook-from_device TARGET_EXAMPLE_TARGET_HOOK_FROM_DEVICE target example_target_hook from_device --target-example_target_hook-libs TARGET_EXAMPLE_TARGET_HOOK_LIBS target example_target_hook libs options --target-example_target_hook-model TARGET_EXAMPLE_TARGET_HOOK_MODEL target example_target_hook model string --target-example_target_hook-tag TARGET_EXAMPLE_TARGET_HOOK_TAG target example_target_hook tag string --target-example_target_hook-device TARGET_EXAMPLE_TARGET_HOOK_DEVICE target example_target_hook device string --target-example_target_hook-keys TARGET_EXAMPLE_TARGET_HOOK_KEYS target example_target_hook keys options target ext_dev: --target-ext_dev-from_device TARGET_EXT_DEV_FROM_DEVICE target ext_dev from_device --target-ext_dev-libs TARGET_EXT_DEV_LIBS target ext_dev libs options --target-ext_dev-model TARGET_EXT_DEV_MODEL target ext_dev model string --target-ext_dev-system-lib TARGET_EXT_DEV_SYSTEM_LIB target ext_dev system-lib --target-ext_dev-tag TARGET_EXT_DEV_TAG target ext_dev tag string --target-ext_dev-device TARGET_EXT_DEV_DEVICE target ext_dev device string --target-ext_dev-keys TARGET_EXT_DEV_KEYS target ext_dev keys options target llvm: --target-llvm-fast-math TARGET_LLVM_FAST_MATH target llvm fast-math --target-llvm-opt-level TARGET_LLVM_OPT_LEVEL target llvm opt-level --target-llvm-unpacked-api TARGET_LLVM_UNPACKED_API target llvm unpacked-api --target-llvm-from_device TARGET_LLVM_FROM_DEVICE target llvm from_device --target-llvm-fast-math-ninf TARGET_LLVM_FAST_MATH_NINF target llvm fast-math-ninf --target-llvm-mattr TARGET_LLVM_MATTR target llvm mattr options --target-llvm-num-cores TARGET_LLVM_NUM_CORES target llvm num-cores --target-llvm-libs TARGET_LLVM_LIBS target llvm libs options --target-llvm-fast-math-nsz TARGET_LLVM_FAST_MATH_NSZ target llvm fast-math-nsz --target-llvm-link-params TARGET_LLVM_LINK_PARAMS target llvm link-params --target-llvm-interface-api TARGET_LLVM_INTERFACE_API target llvm interface-api string --target-llvm-fast-math-contract TARGET_LLVM_FAST_MATH_CONTRACT target llvm fast-math-contract --target-llvm-system-lib TARGET_LLVM_SYSTEM_LIB target llvm system-lib --target-llvm-tag TARGET_LLVM_TAG target llvm tag string --target-llvm-mtriple TARGET_LLVM_MTRIPLE target llvm mtriple string --target-llvm-model TARGET_LLVM_MODEL target llvm model string --target-llvm-mfloat-abi TARGET_LLVM_MFLOAT_ABI target llvm mfloat-abi string --target-llvm-mcpu TARGET_LLVM_MCPU target llvm mcpu string --target-llvm-device TARGET_LLVM_DEVICE target llvm device string --target-llvm-runtime TARGET_LLVM_RUNTIME target llvm runtime string --target-llvm-fast-math-arcp TARGET_LLVM_FAST_MATH_ARCP target llvm fast-math-arcp --target-llvm-fast-math-reassoc TARGET_LLVM_FAST_MATH_REASSOC target llvm fast-math-reassoc --target-llvm-mabi TARGET_LLVM_MABI target llvm mabi string --target-llvm-keys TARGET_LLVM_KEYS target llvm keys options --target-llvm-fast-math-nnan TARGET_LLVM_FAST_MATH_NNAN target llvm fast-math-nnan target hybrid: --target-hybrid-from_device TARGET_HYBRID_FROM_DEVICE target hybrid from_device --target-hybrid-libs 
TARGET_HYBRID_LIBS target hybrid libs options --target-hybrid-model TARGET_HYBRID_MODEL target hybrid model string --target-hybrid-system-lib TARGET_HYBRID_SYSTEM_LIB target hybrid system-lib --target-hybrid-tag TARGET_HYBRID_TAG target hybrid tag string --target-hybrid-device TARGET_HYBRID_DEVICE target hybrid device string --target-hybrid-keys TARGET_HYBRID_KEYS target hybrid keys options target aocl: --target-aocl-from_device TARGET_AOCL_FROM_DEVICE target aocl from_device --target-aocl-libs TARGET_AOCL_LIBS target aocl libs options --target-aocl-model TARGET_AOCL_MODEL target aocl model string --target-aocl-system-lib TARGET_AOCL_SYSTEM_LIB target aocl system-lib --target-aocl-tag TARGET_AOCL_TAG target aocl tag string --target-aocl-device TARGET_AOCL_DEVICE target aocl device string --target-aocl-keys TARGET_AOCL_KEYS target aocl keys options target nvptx: --target-nvptx-max_num_threads TARGET_NVPTX_MAX_NUM_THREADS target nvptx max_num_threads --target-nvptx-thread_warp_size TARGET_NVPTX_THREAD_WARP_SIZE target nvptx thread_warp_size --target-nvptx-from_device TARGET_NVPTX_FROM_DEVICE target nvptx from_device --target-nvptx-libs TARGET_NVPTX_LIBS target nvptx libs options --target-nvptx-model TARGET_NVPTX_MODEL target nvptx model string --target-nvptx-system-lib TARGET_NVPTX_SYSTEM_LIB target nvptx system-lib --target-nvptx-mtriple TARGET_NVPTX_MTRIPLE target nvptx mtriple string --target-nvptx-tag TARGET_NVPTX_TAG target nvptx tag string --target-nvptx-mcpu TARGET_NVPTX_MCPU target nvptx mcpu string --target-nvptx-device TARGET_NVPTX_DEVICE target nvptx device string --target-nvptx-keys TARGET_NVPTX_KEYS target nvptx keys options target opencl: --target-opencl-max_num_threads TARGET_OPENCL_MAX_NUM_THREADS target opencl max_num_threads --target-opencl-thread_warp_size TARGET_OPENCL_THREAD_WARP_SIZE target opencl thread_warp_size --target-opencl-from_device TARGET_OPENCL_FROM_DEVICE target opencl from_device --target-opencl-libs TARGET_OPENCL_LIBS target opencl libs options --target-opencl-model TARGET_OPENCL_MODEL target opencl model string --target-opencl-system-lib TARGET_OPENCL_SYSTEM_LIB target opencl system-lib --target-opencl-tag TARGET_OPENCL_TAG target opencl tag string --target-opencl-device TARGET_OPENCL_DEVICE target opencl device string --target-opencl-keys TARGET_OPENCL_KEYS target opencl keys options target metal: --target-metal-max_num_threads TARGET_METAL_MAX_NUM_THREADS target metal max_num_threads --target-metal-thread_warp_size TARGET_METAL_THREAD_WARP_SIZE target metal thread_warp_size --target-metal-from_device TARGET_METAL_FROM_DEVICE target metal from_device --target-metal-libs TARGET_METAL_LIBS target metal libs options --target-metal-keys TARGET_METAL_KEYS target metal keys options --target-metal-model TARGET_METAL_MODEL target metal model string --target-metal-system-lib TARGET_METAL_SYSTEM_LIB target metal system-lib --target-metal-tag TARGET_METAL_TAG target metal tag string --target-metal-device TARGET_METAL_DEVICE target metal device string --target-metal-max_function_args TARGET_METAL_MAX_FUNCTION_ARGS target metal max_function_args target webgpu: --target-webgpu-max_num_threads TARGET_WEBGPU_MAX_NUM_THREADS target webgpu max_num_threads --target-webgpu-from_device TARGET_WEBGPU_FROM_DEVICE target webgpu from_device --target-webgpu-libs TARGET_WEBGPU_LIBS target webgpu libs options --target-webgpu-model TARGET_WEBGPU_MODEL target webgpu model string --target-webgpu-system-lib TARGET_WEBGPU_SYSTEM_LIB target webgpu system-lib --target-webgpu-tag 
TARGET_WEBGPU_TAG target webgpu tag string --target-webgpu-device TARGET_WEBGPU_DEVICE target webgpu device string --target-webgpu-keys TARGET_WEBGPU_KEYS target webgpu keys options target rocm: --target-rocm-max_num_threads TARGET_ROCM_MAX_NUM_THREADS target rocm max_num_threads --target-rocm-thread_warp_size TARGET_ROCM_THREAD_WARP_SIZE target rocm thread_warp_size --target-rocm-from_device TARGET_ROCM_FROM_DEVICE target rocm from_device --target-rocm-libs TARGET_ROCM_LIBS target rocm libs options --target-rocm-mattr TARGET_ROCM_MATTR target rocm mattr options --target-rocm-max_shared_memory_per_block TARGET_ROCM_MAX_SHARED_MEMORY_PER_BLOCK target rocm max_shared_memory_per_block --target-rocm-model TARGET_ROCM_MODEL target rocm model string --target-rocm-system-lib TARGET_ROCM_SYSTEM_LIB target rocm system-lib --target-rocm-mtriple TARGET_ROCM_MTRIPLE target rocm mtriple string --target-rocm-tag TARGET_ROCM_TAG target rocm tag string --target-rocm-device TARGET_ROCM_DEVICE target rocm device string --target-rocm-mcpu TARGET_ROCM_MCPU target rocm mcpu string --target-rocm-max_threads_per_block TARGET_ROCM_MAX_THREADS_PER_BLOCK target rocm max_threads_per_block --target-rocm-keys TARGET_ROCM_KEYS target rocm keys options target vulkan: --target-vulkan-max_num_threads TARGET_VULKAN_MAX_NUM_THREADS target vulkan max_num_threads --target-vulkan-thread_warp_size TARGET_VULKAN_THREAD_WARP_SIZE target vulkan thread_warp_size --target-vulkan-from_device TARGET_VULKAN_FROM_DEVICE target vulkan from_device --target-vulkan-max_per_stage_descriptor_storage_buffer TARGET_VULKAN_MAX_PER_STAGE_DESCRIPTOR_STORAGE_BUFFER target vulkan max_per_stage_descriptor_storage_buffer --target-vulkan-driver_version TARGET_VULKAN_DRIVER_VERSION target vulkan driver_version --target-vulkan-supports_16bit_buffer TARGET_VULKAN_SUPPORTS_16BIT_BUFFER target vulkan supports_16bit_buffer --target-vulkan-max_block_size_z TARGET_VULKAN_MAX_BLOCK_SIZE_Z target vulkan max_block_size_z --target-vulkan-libs TARGET_VULKAN_LIBS target vulkan libs options --target-vulkan-supports_dedicated_allocation TARGET_VULKAN_SUPPORTS_DEDICATED_ALLOCATION target vulkan supports_dedicated_allocation --target-vulkan-supported_subgroup_operations TARGET_VULKAN_SUPPORTED_SUBGROUP_OPERATIONS target vulkan supported_subgroup_operations --target-vulkan-mattr TARGET_VULKAN_MATTR target vulkan mattr options --target-vulkan-max_storage_buffer_range TARGET_VULKAN_MAX_STORAGE_BUFFER_RANGE target vulkan max_storage_buffer_range --target-vulkan-max_push_constants_size TARGET_VULKAN_MAX_PUSH_CONSTANTS_SIZE target vulkan max_push_constants_size --target-vulkan-supports_push_descriptor TARGET_VULKAN_SUPPORTS_PUSH_DESCRIPTOR target vulkan supports_push_descriptor --target-vulkan-supports_int64 TARGET_VULKAN_SUPPORTS_INT64 target vulkan supports_int64 --target-vulkan-supports_float32 TARGET_VULKAN_SUPPORTS_FLOAT32 target vulkan supports_float32 --target-vulkan-model TARGET_VULKAN_MODEL target vulkan model string --target-vulkan-max_block_size_x TARGET_VULKAN_MAX_BLOCK_SIZE_X target vulkan max_block_size_x --target-vulkan-system-lib TARGET_VULKAN_SYSTEM_LIB target vulkan system-lib --target-vulkan-max_block_size_y TARGET_VULKAN_MAX_BLOCK_SIZE_Y target vulkan max_block_size_y --target-vulkan-tag TARGET_VULKAN_TAG target vulkan tag string --target-vulkan-supports_int8 TARGET_VULKAN_SUPPORTS_INT8 target vulkan supports_int8 --target-vulkan-max_spirv_version TARGET_VULKAN_MAX_SPIRV_VERSION target vulkan max_spirv_version --target-vulkan-vulkan_api_version 
TARGET_VULKAN_VULKAN_API_VERSION target vulkan vulkan_api_version --target-vulkan-supports_8bit_buffer TARGET_VULKAN_SUPPORTS_8BIT_BUFFER target vulkan supports_8bit_buffer --target-vulkan-device_type TARGET_VULKAN_DEVICE_TYPE target vulkan device_type string --target-vulkan-supports_int32 TARGET_VULKAN_SUPPORTS_INT32 target vulkan supports_int32 --target-vulkan-device TARGET_VULKAN_DEVICE target vulkan device string --target-vulkan-max_threads_per_block TARGET_VULKAN_MAX_THREADS_PER_BLOCK target vulkan max_threads_per_block --target-vulkan-max_uniform_buffer_range TARGET_VULKAN_MAX_UNIFORM_BUFFER_RANGE target vulkan max_uniform_buffer_range --target-vulkan-driver_name TARGET_VULKAN_DRIVER_NAME target vulkan driver_name string --target-vulkan-supports_integer_dot_product TARGET_VULKAN_SUPPORTS_INTEGER_DOT_PRODUCT target vulkan supports_integer_dot_product --target-vulkan-supports_storage_buffer_storage_class TARGET_VULKAN_SUPPORTS_STORAGE_BUFFER_STORAGE_CLASS target vulkan supports_storage_buffer_storage_class --target-vulkan-supports_float16 TARGET_VULKAN_SUPPORTS_FLOAT16 target vulkan supports_float16 --target-vulkan-device_name TARGET_VULKAN_DEVICE_NAME target vulkan device_name string --target-vulkan-supports_float64 TARGET_VULKAN_SUPPORTS_FLOAT64 target vulkan supports_float64 --target-vulkan-keys TARGET_VULKAN_KEYS target vulkan keys options --target-vulkan-max_shared_memory_per_block TARGET_VULKAN_MAX_SHARED_MEMORY_PER_BLOCK target vulkan max_shared_memory_per_block --target-vulkan-supports_int16 TARGET_VULKAN_SUPPORTS_INT16 target vulkan supports_int16 target cuda: --target-cuda-max_num_threads TARGET_CUDA_MAX_NUM_THREADS target cuda max_num_threads --target-cuda-thread_warp_size TARGET_CUDA_THREAD_WARP_SIZE target cuda thread_warp_size --target-cuda-from_device TARGET_CUDA_FROM_DEVICE target cuda from_device --target-cuda-arch TARGET_CUDA_ARCH target cuda arch string --target-cuda-libs TARGET_CUDA_LIBS target cuda libs options --target-cuda-max_shared_memory_per_block TARGET_CUDA_MAX_SHARED_MEMORY_PER_BLOCK target cuda max_shared_memory_per_block --target-cuda-model TARGET_CUDA_MODEL target cuda model string --target-cuda-system-lib TARGET_CUDA_SYSTEM_LIB target cuda system-lib --target-cuda-tag TARGET_CUDA_TAG target cuda tag string --target-cuda-device TARGET_CUDA_DEVICE target cuda device string --target-cuda-mcpu TARGET_CUDA_MCPU target cuda mcpu string --target-cuda-max_threads_per_block TARGET_CUDA_MAX_THREADS_PER_BLOCK target cuda max_threads_per_block --target-cuda-registers_per_block TARGET_CUDA_REGISTERS_PER_BLOCK target cuda registers_per_block --target-cuda-keys TARGET_CUDA_KEYS target cuda keys options target sdaccel: --target-sdaccel-from_device TARGET_SDACCEL_FROM_DEVICE target sdaccel from_device --target-sdaccel-libs TARGET_SDACCEL_LIBS target sdaccel libs options --target-sdaccel-model TARGET_SDACCEL_MODEL target sdaccel model string --target-sdaccel-system-lib TARGET_SDACCEL_SYSTEM_LIB target sdaccel system-lib --target-sdaccel-tag TARGET_SDACCEL_TAG target sdaccel tag string --target-sdaccel-device TARGET_SDACCEL_DEVICE target sdaccel device string --target-sdaccel-keys TARGET_SDACCEL_KEYS target sdaccel keys options target composite: --target-composite-from_device TARGET_COMPOSITE_FROM_DEVICE target composite from_device --target-composite-libs TARGET_COMPOSITE_LIBS target composite libs options --target-composite-devices TARGET_COMPOSITE_DEVICES target composite devices options --target-composite-model TARGET_COMPOSITE_MODEL target composite model string 
--target-composite-tag TARGET_COMPOSITE_TAG target composite tag string --target-composite-device TARGET_COMPOSITE_DEVICE target composite device string --target-composite-keys TARGET_COMPOSITE_KEYS target composite keys options target stackvm: --target-stackvm-from_device TARGET_STACKVM_FROM_DEVICE target stackvm from_device --target-stackvm-libs TARGET_STACKVM_LIBS target stackvm libs options --target-stackvm-model TARGET_STACKVM_MODEL target stackvm model string --target-stackvm-system-lib TARGET_STACKVM_SYSTEM_LIB target stackvm system-lib --target-stackvm-tag TARGET_STACKVM_TAG target stackvm tag string --target-stackvm-device TARGET_STACKVM_DEVICE target stackvm device string --target-stackvm-keys TARGET_STACKVM_KEYS target stackvm keys options target aocl_sw_emu: --target-aocl_sw_emu-from_device TARGET_AOCL_SW_EMU_FROM_DEVICE target aocl_sw_emu from_device --target-aocl_sw_emu-libs TARGET_AOCL_SW_EMU_LIBS target aocl_sw_emu libs options --target-aocl_sw_emu-model TARGET_AOCL_SW_EMU_MODEL target aocl_sw_emu model string --target-aocl_sw_emu-system-lib TARGET_AOCL_SW_EMU_SYSTEM_LIB target aocl_sw_emu system-lib --target-aocl_sw_emu-tag TARGET_AOCL_SW_EMU_TAG target aocl_sw_emu tag string --target-aocl_sw_emu-device TARGET_AOCL_SW_EMU_DEVICE target aocl_sw_emu device string --target-aocl_sw_emu-keys TARGET_AOCL_SW_EMU_KEYS target aocl_sw_emu keys options target c: --target-c-unpacked-api TARGET_C_UNPACKED_API target c unpacked-api --target-c-from_device TARGET_C_FROM_DEVICE target c from_device --target-c-libs TARGET_C_LIBS target c libs options --target-c-constants-byte-alignment TARGET_C_CONSTANTS_BYTE_ALIGNMENT target c constants-byte-alignment --target-c-executor TARGET_C_EXECUTOR target c executor string --target-c-link-params TARGET_C_LINK_PARAMS target c link-params --target-c-model TARGET_C_MODEL target c model string --target-c-workspace-byte-alignment TARGET_C_WORKSPACE_BYTE_ALIGNMENT target c workspace-byte-alignment --target-c-system-lib TARGET_C_SYSTEM_LIB target c system-lib --target-c-tag TARGET_C_TAG target c tag string --target-c-interface-api TARGET_C_INTERFACE_API target c interface-api string --target-c-mcpu TARGET_C_MCPU target c mcpu string --target-c-device TARGET_C_DEVICE target c device string --target-c-runtime TARGET_C_RUNTIME target c runtime string --target-c-keys TARGET_C_KEYS target c keys options --target-c-march TARGET_C_MARCH target c march string target hexagon: --target-hexagon-from_device TARGET_HEXAGON_FROM_DEVICE target hexagon from_device --target-hexagon-libs TARGET_HEXAGON_LIBS target hexagon libs options --target-hexagon-mattr TARGET_HEXAGON_MATTR target hexagon mattr options --target-hexagon-model TARGET_HEXAGON_MODEL target hexagon model string --target-hexagon-llvm-options TARGET_HEXAGON_LLVM_OPTIONS target hexagon llvm-options options --target-hexagon-mtriple TARGET_HEXAGON_MTRIPLE target hexagon mtriple string --target-hexagon-system-lib TARGET_HEXAGON_SYSTEM_LIB target hexagon system-lib --target-hexagon-mcpu TARGET_HEXAGON_MCPU target hexagon mcpu string --target-hexagon-device TARGET_HEXAGON_DEVICE target hexagon device string --target-hexagon-tag TARGET_HEXAGON_TAG target hexagon tag string --target-hexagon-link-params TARGET_HEXAGON_LINK_PARAMS target hexagon link-params --target-hexagon-keys TARGET_HEXAGON_KEYS target hexagon keys options executor graph: --executor-graph-link-params EXECUTOR_GRAPH_LINK_PARAMS Executor graph link-params executor aot: --executor-aot-workspace-byte-alignment EXECUTOR_AOT_WORKSPACE_BYTE_ALIGNMENT 
Executor aot workspace-byte-alignment --executor-aot-unpacked-api EXECUTOR_AOT_UNPACKED_API Executor aot unpacked-api --executor-aot-interface-api EXECUTOR_AOT_INTERFACE_API Executor aot interface-api string --executor-aot-link-params EXECUTOR_AOT_LINK_PARAMS Executor aot link-params runtime cpp: --runtime-cpp-system-lib RUNTIME_CPP_SYSTEM_LIB Runtime cpp system-lib runtime crt: --runtime-crt-system-lib RUNTIME_CRT_SYSTEM_LIB Runtime crt system-lib 
###Markdown ```{admonition} Adding ONNX support to TVM TVM relies on the ONNX python library being available on your system. You can install ONNX with the command ``pip3 install --user onnx onnxoptimizer``. If you have root access and want to install ONNX globally, you can drop the ``--user`` option. The dependency on ``onnxoptimizer`` is optional and is only needed for ``onnx>=1.9``.``` Compiling the ONNX model to the TVM runtime Once the ResNet-50 model has been downloaded, the next step is to compile it. To accomplish that, we will use ``tvmc compile``. The output of the compilation process is a TAR package of the model, compiled into a dynamic library for the target platform. That model can be run on the target device using the TVM runtime. 
###Code # This may take several minutes depending on your machine !python -m tvm.driver.tvmc compile --target "llvm" \ --output resnet50-v2-7-tvm.tar \ ../../_models/resnet50-v2-7.onnx ###Output One or more operators have not been tuned. Please tune your model for better performance. Use DEBUG logging level to see more details. 
###Markdown Take a look at the files that ``tvmc compile`` created in the module: ###Code %%bash mkdir model tar -xvf resnet50-v2-7-tvm.tar -C model ###Output mod.so mod.json mod.params 
###Markdown Three files are listed: * ``mod.so`` is the model, represented as a C++ library, that can be loaded by the TVM runtime. * ``mod.json`` is a text representation of the TVM Relay computation graph. * ``mod.params`` is a file containing the parameters of the pre-trained model. The module can be loaded directly by your application, and the model can be run via the TVM runtime APIs. ```{admonition} Defining the correct target Specifying the correct target (option ``--target``) can have a huge impact on the performance of the compiled module, as it can take advantage of hardware features available on the target. For more information, please refer to [Auto-tuning a convolutional network for x86 CPU](tune_relay_x86). We recommend identifying which CPU you are running, along with optional features, and setting the target appropriately.``` Running the model from the compiled module with TVMC Now that the model has been compiled into a module, we can use the TVM runtime to make predictions with it. TVMC has the TVM runtime built in, allowing you to run compiled TVM models. In order to use TVMC to run the model and make predictions, we need two things: - the compiled module, which we have just produced; - valid input to the model with which to make the predictions. Each model is particular when it comes to expected tensor shapes, formats, and data types. For this reason, most models require some preprocessing and postprocessing, to ensure the input is valid and to interpret the output. TVMC uses NumPy's ``.npz`` format for both input and output data. This is a well-supported NumPy format that can serialize multiple arrays into a single file. As input for this tutorial, we will use the image of a cat, but you are free to substitute this image with any of your choosing. Input preprocessing For the ResNet-50 v2 model, the expected input is in ImageNet format. Below is an example of a script that preprocesses an image for ResNet-50 v2. You will need a supported version of the Python Image Library installed; you can use ``pip3 install --user pillow`` to satisfy this requirement of the script. 
###Code #!python ./preprocess.py from tvm.contrib.download import download_testdata from PIL import Image import numpy as np img_url = "https://s3.amazonaws.com/model-server/inputs/kitten.jpg" img_path = download_testdata(img_url, "imagenet_cat.png", module="data") # Resize it to 224x224 resized_image = Image.open(img_path).resize((224, 224)) img_data = np.asarray(resized_image).astype("float32") # ONNX expects NCHW input, so convert the array img_data = np.transpose(img_data, (2, 0, 1)) # Normalize according to ImageNet imagenet_mean = np.array([0.485, 0.456, 0.406]) imagenet_stddev = np.array([0.229, 0.224, 0.225]) norm_img_data = np.zeros(img_data.shape).astype("float32") for i in range(img_data.shape[0]): norm_img_data[i, :, :] = (img_data[i, :, :] / 255 - imagenet_mean[i]) / imagenet_stddev[i] # Add batch dimension img_data = np.expand_dims(norm_img_data, axis=0) # Save to .npz (outputs imagenet_cat.npz) np.savez("imagenet_cat", data=img_data) ###Output _____no_output_____ 
###Markdown Running the compiled module With both the model and the input data in hand, we can now run TVMC to make a prediction: ###Code !python -m tvm.driver.tvmc run \ --inputs imagenet_cat.npz \ --output predictions.npz \ resnet50-v2-7-tvm.tar ###Output _____no_output_____ 
###Markdown Recall that the ``.tar`` model file includes the C++ library, a description of the Relay model, and the parameters of the model. TVMC includes the TVM runtime, which can load the model and make predictions against the input. When the command above is run, TVMC outputs a new file, ``predictions.npz``, which contains the model output tensors in NumPy format. In this example, the model was run on the same machine used for compilation. In some cases, you might want to run it remotely via an RPC Tracker. To read more about these options, check out: ###Code !python -m 
tvm.driver.tvmc run --help ###Output usage: tvmc run [-h] [--device {cpu,cuda,cl,metal,vulkan,rocm,micro}] [--fill-mode {zeros,ones,random}] [-i INPUTS] [-o OUTPUTS] [--print-time] [--print-top N] [--profile] [--end-to-end] [--repeat N] [--number N] [--rpc-key RPC_KEY] [--rpc-tracker RPC_TRACKER] [--list-options] PATH positional arguments: PATH path to the compiled module file or to the project directory if '--device micro' is selected. optional arguments: -h, --help show this help message and exit --device {cpu,cuda,cl,metal,vulkan,rocm,micro} target device to run the compiled module. Defaults to 'cpu' --fill-mode {zeros,ones,random} fill all input tensors with values. In case --inputs/-i is provided, they will take precedence over --fill-mode. Any remaining inputs will be filled using the chosen fill mode. Defaults to 'random' -i INPUTS, --inputs INPUTS path to the .npz input file -o OUTPUTS, --outputs OUTPUTS path to the .npz output file --print-time record and print the execution time(s). (non-micro devices only) --print-top N print the top n values and indices of the output tensor --profile generate profiling data from the runtime execution. Using --profile requires the Graph Executor Debug enabled on TVM. Profiling may also have an impact on inference time, making it take longer to be generated. (non-micro devices only) --end-to-end Measure data transfers as well as model execution. This can provide a more realistic performance measurement in many cases. --repeat N run the model n times. Defaults to '1' --number N repeat the run n times. Defaults to '1' --rpc-key RPC_KEY the RPC tracker key of the target device. (non-micro devices only) --rpc-tracker RPC_TRACKER hostname (required) and port (optional, defaults to 9090) of the RPC tracker, e.g. '192.168.0.100:9999'. (non-micro devices only) --list-options show all run options and option choices when '--device micro' is selected. 
(micro devices only) ###Markdown Output postprocessing As mentioned before, each model will have its own particular way of providing output tensors. We need to run some postprocessing, using a lookup table provided for the model, to render the output of ResNet-50 v2 into a more human-readable form. The script below shows an example of the postprocessing that extracts labels from the output of the compiled module. Running this script should produce the following output: 
###Code #!python ./postprocess.py import os.path import numpy as np from scipy.special import softmax from tvm.contrib.download import download_testdata # Download a list of labels labels_url = "https://s3.amazonaws.com/onnx-model-zoo/synset.txt" labels_path = download_testdata(labels_url, "synset.txt", module="data") with open(labels_path, "r") as f: labels = [l.rstrip() for l in f] output_file = "predictions.npz" # Open the output and read the output tensor if os.path.exists(output_file): with np.load(output_file) as data: scores = softmax(data["output_0"]) scores = np.squeeze(scores) ranks = np.argsort(scores)[::-1] for rank in ranks[0:5]: print("class='%s' with probability=%f" % (labels[rank], scores[rank])) ###Output class='n02123045 tabby, tabby cat' with probability=0.621104 class='n02123159 tiger cat' with probability=0.356378 class='n02124075 Egyptian cat' with probability=0.019712 class='n02129604 tiger, Panthera tigris' with probability=0.001215 class='n04040759 radiator' with probability=0.000262 
###Markdown Try replacing the cat image with other images and see what kind of predictions the ResNet model makes. Automatically tuning the ResNet model The previous model was compiled to work on the TVM runtime, but did not include any platform-specific optimization. In this section, we will show you how to use TVMC to build an optimized model targeting your working platform. In some cases, we might not get the expected performance when running inference with the compiled module. In cases like this, we can make use of the auto-tuner to find a better configuration for the model and obtain a boost in performance. Tuning in TVM refers to the process by which a model is optimized to run faster on a given target. This differs from training or fine-tuning in that it does not affect the accuracy of the model, only the runtime performance. As part of the tuning process, TVM will try running many different operator implementation variants to see which perform best. The results of these runs are stored in a tuning records file, which is ultimately the output of the ``tune`` subcommand. In its simplest form, tuning requires you to provide three things: - the target specification of the device you intend to run this model on; - the path to an output file in which the tuning records will be stored; - and finally, the path to the model to be tuned. The default search algorithm requires `xgboost`; see below for more details on the tuning search algorithms:
```bash
pip install xgboost cloudpickle
```
The example below demonstrates this in practice: 
###Code !python -m tvm.driver.tvmc tune --target "llvm" \ --output resnet50-v2-7-autotuner_records.json \ ../../_models/resnet50-v2-7.onnx ###Output /media/pc/data/4tb/lxw/anaconda3/envs/mx39/lib/python3.9/site-packages/xgboost/compat.py:36: FutureWarning: pandas.Int64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead. from pandas import MultiIndex, Int64Index [Task 1/25] Current/Best: 139.87/ 252.51 GFLOPS | Progress: (40/40) | 20.88 s Done. [Task 2/25] Current/Best: 42.44/ 183.76 GFLOPS | Progress: (40/40) | 11.12 s Done. [Task 3/25] Current/Best: 176.21/ 215.65 GFLOPS | Progress: (40/40) | 11.55 s Done. [Task 4/25] Current/Best: 113.94/ 160.83 GFLOPS | Progress: (40/40) | 13.36 s Done. [Task 5/25] Current/Best: 120.38/ 164.05 GFLOPS | Progress: (40/40) | 12.15 s Done. [Task 6/25] Current/Best: 103.44/ 188.69 GFLOPS | Progress: (40/40) | 12.60 s Done. [Task 7/25] Current/Best: 137.09/ 204.00 GFLOPS | Progress: (40/40) | 11.36 s Done. [Task 8/25] Current/Best: 99.24/ 195.34 GFLOPS | Progress: (40/40) | 18.87 s Done. [Task 9/25] Current/Best: 70.21/ 189.30 GFLOPS | Progress: (40/40) | 19.84 s Done. [Task 10/25] Current/Best: 139.57/ 150.27 GFLOPS | Progress: (40/40) | 11.81 s Done. [Task 11/25] Current/Best: 136.51/ 192.55 GFLOPS | Progress: (40/40) | 11.38 s Done. [Task 12/25] Current/Best: 127.62/ 216.62 GFLOPS | Progress: (40/40) | 15.05 s Done. [Task 13/25] Current/Best: 76.30/ 237.37 GFLOPS | Progress: (40/40) | 12.29 s Done. [Task 14/25] Current/Best: 67.69/ 197.50 GFLOPS | Progress: (40/40) | 17.04 s Done. [Task 16/25] Current/Best: 57.91/ 200.78 GFLOPS | Progress: (40/40) | 12.76 s Done. [Task 17/25] Current/Best: 172.88/ 267.60 GFLOPS | Progress: (40/40) | 12.21 s Done. 
[Task 18/25] Current/Best: 164.30/ 195.15 GFLOPS | Progress: (40/40) | 18.82 s Done. [Task 19/25] Current/Best: 122.30/ 209.99 GFLOPS | Progress: (40/40) | 14.50 s Done. [Task 22/25] Current/Best: 0.00/ 0.00 GFLOPS | Progress: (0/40) | 0.00 s s Done. Done. Done. [Task 22/25] Current/Best: 69.31/ 177.25 GFLOPS | Progress: (40/40) | 12.39 s Done. [Task 23/25] Current/Best: 92.92/ 185.29 GFLOPS | Progress: (40/40) | 13.99 s Done. [Task 25/25] Current/Best: 18.40/ 84.62 GFLOPS | Progress: (40/40) | 20.26 s Done. Done. 
###Markdown In this example, you will see better results if you indicate a more specific target for the ``--target`` flag. TVMC performs a search against the parameter space of the model, trying out different operator configurations and choosing the one that runs fastest on your platform. Although this is a guided search based on the CPU and model operations, it can still take several hours to complete. The output of this search will be saved to the ``resnet50-v2-7-autotuner_records.json`` file, which will later be used to compile an optimized model. ```{admonition} Defining the tuning search algorithm By default this search is guided by the ``XGBoost Grid`` algorithm. Depending on your model complexity and the amount of time available, you might want to choose a different algorithm. The full list can be seen by consulting:``` 
###Code !python -m tvm.driver.tvmc tune --help ###Output usage: tvmc tune [-h] [--early-stopping EARLY_STOPPING] [--min-repeat-ms MIN_REPEAT_MS] [--model-format {keras,onnx,pb,tflite,pytorch,paddle}] [--number NUMBER] -o OUTPUT [--parallel PARALLEL] [--repeat REPEAT] [--rpc-key RPC_KEY] [--rpc-tracker RPC_TRACKER] [--target TARGET] [--target-example_target_hook-from_device TARGET_EXAMPLE_TARGET_HOOK_FROM_DEVICE] [--target-example_target_hook-libs TARGET_EXAMPLE_TARGET_HOOK_LIBS] [--target-example_target_hook-model TARGET_EXAMPLE_TARGET_HOOK_MODEL] [--target-example_target_hook-tag TARGET_EXAMPLE_TARGET_HOOK_TAG] [--target-example_target_hook-device TARGET_EXAMPLE_TARGET_HOOK_DEVICE] [--target-example_target_hook-keys TARGET_EXAMPLE_TARGET_HOOK_KEYS] [--target-ext_dev-from_device TARGET_EXT_DEV_FROM_DEVICE] [--target-ext_dev-libs TARGET_EXT_DEV_LIBS] [--target-ext_dev-model TARGET_EXT_DEV_MODEL] [--target-ext_dev-system-lib TARGET_EXT_DEV_SYSTEM_LIB] [--target-ext_dev-tag TARGET_EXT_DEV_TAG] [--target-ext_dev-device TARGET_EXT_DEV_DEVICE] [--target-ext_dev-keys TARGET_EXT_DEV_KEYS] [--target-llvm-fast-math TARGET_LLVM_FAST_MATH] [--target-llvm-opt-level TARGET_LLVM_OPT_LEVEL] [--target-llvm-unpacked-api TARGET_LLVM_UNPACKED_API] [--target-llvm-from_device TARGET_LLVM_FROM_DEVICE] [--target-llvm-fast-math-ninf TARGET_LLVM_FAST_MATH_NINF] [--target-llvm-mattr TARGET_LLVM_MATTR] [--target-llvm-num-cores TARGET_LLVM_NUM_CORES] [--target-llvm-libs TARGET_LLVM_LIBS] [--target-llvm-fast-math-nsz TARGET_LLVM_FAST_MATH_NSZ] [--target-llvm-link-params TARGET_LLVM_LINK_PARAMS] [--target-llvm-interface-api TARGET_LLVM_INTERFACE_API] [--target-llvm-fast-math-contract TARGET_LLVM_FAST_MATH_CONTRACT] [--target-llvm-system-lib TARGET_LLVM_SYSTEM_LIB] [--target-llvm-tag TARGET_LLVM_TAG] [--target-llvm-mtriple TARGET_LLVM_MTRIPLE] [--target-llvm-model TARGET_LLVM_MODEL] [--target-llvm-mfloat-abi TARGET_LLVM_MFLOAT_ABI] [--target-llvm-mcpu TARGET_LLVM_MCPU] [--target-llvm-device TARGET_LLVM_DEVICE] [--target-llvm-runtime TARGET_LLVM_RUNTIME] [--target-llvm-fast-math-arcp TARGET_LLVM_FAST_MATH_ARCP] [--target-llvm-fast-math-reassoc TARGET_LLVM_FAST_MATH_REASSOC] [--target-llvm-mabi TARGET_LLVM_MABI] [--target-llvm-keys TARGET_LLVM_KEYS] [--target-llvm-fast-math-nnan TARGET_LLVM_FAST_MATH_NNAN] [--target-hybrid-from_device TARGET_HYBRID_FROM_DEVICE] [--target-hybrid-libs TARGET_HYBRID_LIBS] [--target-hybrid-model TARGET_HYBRID_MODEL] [--target-hybrid-system-lib TARGET_HYBRID_SYSTEM_LIB] [--target-hybrid-tag TARGET_HYBRID_TAG] [--target-hybrid-device TARGET_HYBRID_DEVICE] [--target-hybrid-keys TARGET_HYBRID_KEYS] [--target-aocl-from_device TARGET_AOCL_FROM_DEVICE] [--target-aocl-libs 
TARGET_AOCL_LIBS] [--target-aocl-model TARGET_AOCL_MODEL] [--target-aocl-system-lib TARGET_AOCL_SYSTEM_LIB] [--target-aocl-tag TARGET_AOCL_TAG] [--target-aocl-device TARGET_AOCL_DEVICE] [--target-aocl-keys TARGET_AOCL_KEYS] [--target-nvptx-max_num_threads TARGET_NVPTX_MAX_NUM_THREADS] [--target-nvptx-thread_warp_size TARGET_NVPTX_THREAD_WARP_SIZE] [--target-nvptx-from_device TARGET_NVPTX_FROM_DEVICE] [--target-nvptx-libs TARGET_NVPTX_LIBS] [--target-nvptx-model TARGET_NVPTX_MODEL] [--target-nvptx-system-lib TARGET_NVPTX_SYSTEM_LIB] [--target-nvptx-mtriple TARGET_NVPTX_MTRIPLE] [--target-nvptx-tag TARGET_NVPTX_TAG] [--target-nvptx-mcpu TARGET_NVPTX_MCPU] [--target-nvptx-device TARGET_NVPTX_DEVICE] [--target-nvptx-keys TARGET_NVPTX_KEYS] [--target-opencl-max_num_threads TARGET_OPENCL_MAX_NUM_THREADS] [--target-opencl-thread_warp_size TARGET_OPENCL_THREAD_WARP_SIZE] [--target-opencl-from_device TARGET_OPENCL_FROM_DEVICE] [--target-opencl-libs TARGET_OPENCL_LIBS] [--target-opencl-model TARGET_OPENCL_MODEL] [--target-opencl-system-lib TARGET_OPENCL_SYSTEM_LIB] [--target-opencl-tag TARGET_OPENCL_TAG] [--target-opencl-device TARGET_OPENCL_DEVICE] [--target-opencl-keys TARGET_OPENCL_KEYS] [--target-metal-max_num_threads TARGET_METAL_MAX_NUM_THREADS] [--target-metal-thread_warp_size TARGET_METAL_THREAD_WARP_SIZE] [--target-metal-from_device TARGET_METAL_FROM_DEVICE] [--target-metal-libs TARGET_METAL_LIBS] [--target-metal-keys TARGET_METAL_KEYS] [--target-metal-model TARGET_METAL_MODEL] [--target-metal-system-lib TARGET_METAL_SYSTEM_LIB] [--target-metal-tag TARGET_METAL_TAG] [--target-metal-device TARGET_METAL_DEVICE] [--target-metal-max_function_args TARGET_METAL_MAX_FUNCTION_ARGS] [--target-webgpu-max_num_threads TARGET_WEBGPU_MAX_NUM_THREADS] [--target-webgpu-from_device TARGET_WEBGPU_FROM_DEVICE] [--target-webgpu-libs TARGET_WEBGPU_LIBS] [--target-webgpu-model TARGET_WEBGPU_MODEL] [--target-webgpu-system-lib TARGET_WEBGPU_SYSTEM_LIB] [--target-webgpu-tag TARGET_WEBGPU_TAG] [--target-webgpu-device TARGET_WEBGPU_DEVICE] [--target-webgpu-keys TARGET_WEBGPU_KEYS] [--target-rocm-max_num_threads TARGET_ROCM_MAX_NUM_THREADS] [--target-rocm-thread_warp_size TARGET_ROCM_THREAD_WARP_SIZE] [--target-rocm-from_device TARGET_ROCM_FROM_DEVICE] [--target-rocm-libs TARGET_ROCM_LIBS] [--target-rocm-mattr TARGET_ROCM_MATTR] [--target-rocm-max_shared_memory_per_block TARGET_ROCM_MAX_SHARED_MEMORY_PER_BLOCK] [--target-rocm-model TARGET_ROCM_MODEL] [--target-rocm-system-lib TARGET_ROCM_SYSTEM_LIB] [--target-rocm-mtriple TARGET_ROCM_MTRIPLE] [--target-rocm-tag TARGET_ROCM_TAG] [--target-rocm-device TARGET_ROCM_DEVICE] [--target-rocm-mcpu TARGET_ROCM_MCPU] [--target-rocm-max_threads_per_block TARGET_ROCM_MAX_THREADS_PER_BLOCK] [--target-rocm-keys TARGET_ROCM_KEYS] [--target-vulkan-max_num_threads TARGET_VULKAN_MAX_NUM_THREADS] [--target-vulkan-thread_warp_size TARGET_VULKAN_THREAD_WARP_SIZE] [--target-vulkan-from_device TARGET_VULKAN_FROM_DEVICE] [--target-vulkan-max_per_stage_descriptor_storage_buffer TARGET_VULKAN_MAX_PER_STAGE_DESCRIPTOR_STORAGE_BUFFER] [--target-vulkan-driver_version TARGET_VULKAN_DRIVER_VERSION] [--target-vulkan-supports_16bit_buffer TARGET_VULKAN_SUPPORTS_16BIT_BUFFER] [--target-vulkan-max_block_size_z TARGET_VULKAN_MAX_BLOCK_SIZE_Z] [--target-vulkan-libs TARGET_VULKAN_LIBS] [--target-vulkan-supports_dedicated_allocation TARGET_VULKAN_SUPPORTS_DEDICATED_ALLOCATION] [--target-vulkan-supported_subgroup_operations TARGET_VULKAN_SUPPORTED_SUBGROUP_OPERATIONS] [--target-vulkan-mattr 
TARGET_VULKAN_MATTR] [--target-vulkan-max_storage_buffer_range TARGET_VULKAN_MAX_STORAGE_BUFFER_RANGE] [--target-vulkan-max_push_constants_size TARGET_VULKAN_MAX_PUSH_CONSTANTS_SIZE] [--target-vulkan-supports_push_descriptor TARGET_VULKAN_SUPPORTS_PUSH_DESCRIPTOR] [--target-vulkan-supports_int64 TARGET_VULKAN_SUPPORTS_INT64] [--target-vulkan-supports_float32 TARGET_VULKAN_SUPPORTS_FLOAT32] [--target-vulkan-model TARGET_VULKAN_MODEL] [--target-vulkan-max_block_size_x TARGET_VULKAN_MAX_BLOCK_SIZE_X] [--target-vulkan-system-lib TARGET_VULKAN_SYSTEM_LIB] [--target-vulkan-max_block_size_y TARGET_VULKAN_MAX_BLOCK_SIZE_Y] [--target-vulkan-tag TARGET_VULKAN_TAG] [--target-vulkan-supports_int8 TARGET_VULKAN_SUPPORTS_INT8] [--target-vulkan-max_spirv_version TARGET_VULKAN_MAX_SPIRV_VERSION] [--target-vulkan-vulkan_api_version TARGET_VULKAN_VULKAN_API_VERSION] [--target-vulkan-supports_8bit_buffer TARGET_VULKAN_SUPPORTS_8BIT_BUFFER] [--target-vulkan-device_type TARGET_VULKAN_DEVICE_TYPE] [--target-vulkan-supports_int32 TARGET_VULKAN_SUPPORTS_INT32] [--target-vulkan-device TARGET_VULKAN_DEVICE] [--target-vulkan-max_threads_per_block TARGET_VULKAN_MAX_THREADS_PER_BLOCK] [--target-vulkan-max_uniform_buffer_range TARGET_VULKAN_MAX_UNIFORM_BUFFER_RANGE] [--target-vulkan-driver_name TARGET_VULKAN_DRIVER_NAME] [--target-vulkan-supports_integer_dot_product TARGET_VULKAN_SUPPORTS_INTEGER_DOT_PRODUCT] [--target-vulkan-supports_storage_buffer_storage_class TARGET_VULKAN_SUPPORTS_STORAGE_BUFFER_STORAGE_CLASS] [--target-vulkan-supports_float16 TARGET_VULKAN_SUPPORTS_FLOAT16] [--target-vulkan-device_name TARGET_VULKAN_DEVICE_NAME] [--target-vulkan-supports_float64 TARGET_VULKAN_SUPPORTS_FLOAT64] [--target-vulkan-keys TARGET_VULKAN_KEYS] [--target-vulkan-max_shared_memory_per_block TARGET_VULKAN_MAX_SHARED_MEMORY_PER_BLOCK] [--target-vulkan-supports_int16 TARGET_VULKAN_SUPPORTS_INT16] [--target-cuda-max_num_threads TARGET_CUDA_MAX_NUM_THREADS] [--target-cuda-thread_warp_size TARGET_CUDA_THREAD_WARP_SIZE] [--target-cuda-from_device TARGET_CUDA_FROM_DEVICE] [--target-cuda-arch TARGET_CUDA_ARCH] [--target-cuda-libs TARGET_CUDA_LIBS] [--target-cuda-max_shared_memory_per_block TARGET_CUDA_MAX_SHARED_MEMORY_PER_BLOCK] [--target-cuda-model TARGET_CUDA_MODEL] [--target-cuda-system-lib TARGET_CUDA_SYSTEM_LIB] [--target-cuda-tag TARGET_CUDA_TAG] [--target-cuda-device TARGET_CUDA_DEVICE] [--target-cuda-mcpu TARGET_CUDA_MCPU] [--target-cuda-max_threads_per_block TARGET_CUDA_MAX_THREADS_PER_BLOCK] [--target-cuda-registers_per_block TARGET_CUDA_REGISTERS_PER_BLOCK] [--target-cuda-keys TARGET_CUDA_KEYS] [--target-sdaccel-from_device TARGET_SDACCEL_FROM_DEVICE] [--target-sdaccel-libs TARGET_SDACCEL_LIBS] [--target-sdaccel-model TARGET_SDACCEL_MODEL] [--target-sdaccel-system-lib TARGET_SDACCEL_SYSTEM_LIB] [--target-sdaccel-tag TARGET_SDACCEL_TAG] [--target-sdaccel-device TARGET_SDACCEL_DEVICE] [--target-sdaccel-keys TARGET_SDACCEL_KEYS] [--target-composite-from_device TARGET_COMPOSITE_FROM_DEVICE] [--target-composite-libs TARGET_COMPOSITE_LIBS] [--target-composite-devices TARGET_COMPOSITE_DEVICES] [--target-composite-model TARGET_COMPOSITE_MODEL] [--target-composite-tag TARGET_COMPOSITE_TAG] [--target-composite-device TARGET_COMPOSITE_DEVICE] [--target-composite-keys TARGET_COMPOSITE_KEYS] [--target-stackvm-from_device TARGET_STACKVM_FROM_DEVICE] [--target-stackvm-libs TARGET_STACKVM_LIBS] [--target-stackvm-model TARGET_STACKVM_MODEL] [--target-stackvm-system-lib TARGET_STACKVM_SYSTEM_LIB] [--target-stackvm-tag TARGET_STACKVM_TAG] 
[--target-stackvm-device TARGET_STACKVM_DEVICE] [--target-stackvm-keys TARGET_STACKVM_KEYS] [--target-aocl_sw_emu-from_device TARGET_AOCL_SW_EMU_FROM_DEVICE] [--target-aocl_sw_emu-libs TARGET_AOCL_SW_EMU_LIBS] [--target-aocl_sw_emu-model TARGET_AOCL_SW_EMU_MODEL] [--target-aocl_sw_emu-system-lib TARGET_AOCL_SW_EMU_SYSTEM_LIB] [--target-aocl_sw_emu-tag TARGET_AOCL_SW_EMU_TAG] [--target-aocl_sw_emu-device TARGET_AOCL_SW_EMU_DEVICE] [--target-aocl_sw_emu-keys TARGET_AOCL_SW_EMU_KEYS] [--target-c-unpacked-api TARGET_C_UNPACKED_API] [--target-c-from_device TARGET_C_FROM_DEVICE] [--target-c-libs TARGET_C_LIBS] [--target-c-constants-byte-alignment TARGET_C_CONSTANTS_BYTE_ALIGNMENT] [--target-c-executor TARGET_C_EXECUTOR] [--target-c-link-params TARGET_C_LINK_PARAMS] [--target-c-model TARGET_C_MODEL] [--target-c-workspace-byte-alignment TARGET_C_WORKSPACE_BYTE_ALIGNMENT] [--target-c-system-lib TARGET_C_SYSTEM_LIB] [--target-c-tag TARGET_C_TAG] [--target-c-interface-api TARGET_C_INTERFACE_API] [--target-c-mcpu TARGET_C_MCPU] [--target-c-device TARGET_C_DEVICE] [--target-c-runtime TARGET_C_RUNTIME] [--target-c-keys TARGET_C_KEYS] [--target-c-march TARGET_C_MARCH] [--target-hexagon-from_device TARGET_HEXAGON_FROM_DEVICE] [--target-hexagon-libs TARGET_HEXAGON_LIBS] [--target-hexagon-mattr TARGET_HEXAGON_MATTR] [--target-hexagon-model TARGET_HEXAGON_MODEL] [--target-hexagon-llvm-options TARGET_HEXAGON_LLVM_OPTIONS] [--target-hexagon-mtriple TARGET_HEXAGON_MTRIPLE] [--target-hexagon-system-lib TARGET_HEXAGON_SYSTEM_LIB] [--target-hexagon-mcpu TARGET_HEXAGON_MCPU] [--target-hexagon-device TARGET_HEXAGON_DEVICE] [--target-hexagon-tag TARGET_HEXAGON_TAG] [--target-hexagon-link-params TARGET_HEXAGON_LINK_PARAMS] [--target-hexagon-keys TARGET_HEXAGON_KEYS] [--target-host TARGET_HOST] [--timeout TIMEOUT] [--trials TRIALS] [--tuning-records PATH] [--desired-layout {NCHW,NHWC}] [--enable-autoscheduler] [--cache-line-bytes CACHE_LINE_BYTES] [--num-cores NUM_CORES] [--vector-unit-bytes VECTOR_UNIT_BYTES] [--max-shared-memory-per-block MAX_SHARED_MEMORY_PER_BLOCK] [--max-local-memory-per-block MAX_LOCAL_MEMORY_PER_BLOCK] [--max-threads-per-block MAX_THREADS_PER_BLOCK] [--max-vthread-extent MAX_VTHREAD_EXTENT] [--warp-size WARP_SIZE] [--include-simple-tasks] [--log-estimated-latency] [--tuner {ga,gridsearch,random,xgb,xgb_knob,xgb-rank}] [--input-shapes INPUT_SHAPES] FILE positional arguments: FILE path to the input model file optional arguments: -h, --help show this help message and exit --early-stopping EARLY_STOPPING minimum number of trials before early stopping --min-repeat-ms MIN_REPEAT_MS minimum time to run each trial, in milliseconds. Defaults to 0 on x86 and 1000 on all other targets --model-format {keras,onnx,pb,tflite,pytorch,paddle} specify input model format --number NUMBER number of runs a single repeat is made of. The final number of tuning executions is: (1 + number * repeat) -o OUTPUT, --output OUTPUT output file to store the tuning records for the tuning process --parallel PARALLEL the maximum number of parallel devices to use when tuning --repeat REPEAT how many times to repeat each measurement --rpc-key RPC_KEY the RPC tracker key of the target device. Required when --rpc-tracker is provided. --rpc-tracker RPC_TRACKER hostname (required) and port (optional, defaults to 9090) of the RPC tracker, e.g. 
'192.168.0.100:9999' --target TARGET compilation target as plain string, inline JSON or path to a JSON file --target-host TARGET_HOST the host compilation target, defaults to None --timeout TIMEOUT compilation timeout, in seconds --trials TRIALS the maximum number of tuning trials to perform --tuning-records PATH path to an auto-tuning log file by AutoTVM. --desired-layout {NCHW,NHWC} change the data layout of the whole graph --enable-autoscheduler enable tuning the graph through the AutoScheduler tuner --input-shapes INPUT_SHAPES specify non-generic shapes for model to run, format is "input_name:[dim1,dim2,...,dimn] input_name2:[dim1,dim2]" target example_target_hook: --target-example_target_hook-from_device TARGET_EXAMPLE_TARGET_HOOK_FROM_DEVICE target example_target_hook from_device --target-example_target_hook-libs TARGET_EXAMPLE_TARGET_HOOK_LIBS target example_target_hook libs options --target-example_target_hook-model TARGET_EXAMPLE_TARGET_HOOK_MODEL target example_target_hook model string --target-example_target_hook-tag TARGET_EXAMPLE_TARGET_HOOK_TAG target example_target_hook tag string --target-example_target_hook-device TARGET_EXAMPLE_TARGET_HOOK_DEVICE target example_target_hook device string --target-example_target_hook-keys TARGET_EXAMPLE_TARGET_HOOK_KEYS target example_target_hook keys options target ext_dev: --target-ext_dev-from_device TARGET_EXT_DEV_FROM_DEVICE target ext_dev from_device --target-ext_dev-libs TARGET_EXT_DEV_LIBS target ext_dev libs options --target-ext_dev-model TARGET_EXT_DEV_MODEL target ext_dev model string --target-ext_dev-system-lib TARGET_EXT_DEV_SYSTEM_LIB target ext_dev system-lib --target-ext_dev-tag TARGET_EXT_DEV_TAG target ext_dev tag string --target-ext_dev-device TARGET_EXT_DEV_DEVICE target ext_dev device string --target-ext_dev-keys TARGET_EXT_DEV_KEYS target ext_dev keys options target llvm: --target-llvm-fast-math TARGET_LLVM_FAST_MATH target llvm fast-math --target-llvm-opt-level TARGET_LLVM_OPT_LEVEL target llvm opt-level --target-llvm-unpacked-api TARGET_LLVM_UNPACKED_API target llvm unpacked-api --target-llvm-from_device TARGET_LLVM_FROM_DEVICE target llvm from_device --target-llvm-fast-math-ninf TARGET_LLVM_FAST_MATH_NINF target llvm fast-math-ninf --target-llvm-mattr TARGET_LLVM_MATTR target llvm mattr options --target-llvm-num-cores TARGET_LLVM_NUM_CORES target llvm num-cores --target-llvm-libs TARGET_LLVM_LIBS target llvm libs options --target-llvm-fast-math-nsz TARGET_LLVM_FAST_MATH_NSZ target llvm fast-math-nsz --target-llvm-link-params TARGET_LLVM_LINK_PARAMS target llvm link-params --target-llvm-interface-api TARGET_LLVM_INTERFACE_API target llvm interface-api string --target-llvm-fast-math-contract TARGET_LLVM_FAST_MATH_CONTRACT target llvm fast-math-contract --target-llvm-system-lib TARGET_LLVM_SYSTEM_LIB target llvm system-lib --target-llvm-tag TARGET_LLVM_TAG target llvm tag string --target-llvm-mtriple TARGET_LLVM_MTRIPLE target llvm mtriple string --target-llvm-model TARGET_LLVM_MODEL target llvm model string --target-llvm-mfloat-abi TARGET_LLVM_MFLOAT_ABI target llvm mfloat-abi string --target-llvm-mcpu TARGET_LLVM_MCPU target llvm mcpu string --target-llvm-device TARGET_LLVM_DEVICE target llvm device string --target-llvm-runtime TARGET_LLVM_RUNTIME target llvm runtime string --target-llvm-fast-math-arcp TARGET_LLVM_FAST_MATH_ARCP target llvm fast-math-arcp --target-llvm-fast-math-reassoc TARGET_LLVM_FAST_MATH_REASSOC target llvm fast-math-reassoc --target-llvm-mabi TARGET_LLVM_MABI target llvm mabi string 
--target-llvm-keys TARGET_LLVM_KEYS target llvm keys options --target-llvm-fast-math-nnan TARGET_LLVM_FAST_MATH_NNAN target llvm fast-math-nnan target hybrid: --target-hybrid-from_device TARGET_HYBRID_FROM_DEVICE target hybrid from_device --target-hybrid-libs TARGET_HYBRID_LIBS target hybrid libs options --target-hybrid-model TARGET_HYBRID_MODEL target hybrid model string --target-hybrid-system-lib TARGET_HYBRID_SYSTEM_LIB target hybrid system-lib --target-hybrid-tag TARGET_HYBRID_TAG target hybrid tag string --target-hybrid-device TARGET_HYBRID_DEVICE target hybrid device string --target-hybrid-keys TARGET_HYBRID_KEYS target hybrid keys options target aocl: --target-aocl-from_device TARGET_AOCL_FROM_DEVICE target aocl from_device --target-aocl-libs TARGET_AOCL_LIBS target aocl libs options --target-aocl-model TARGET_AOCL_MODEL target aocl model string --target-aocl-system-lib TARGET_AOCL_SYSTEM_LIB target aocl system-lib --target-aocl-tag TARGET_AOCL_TAG target aocl tag string --target-aocl-device TARGET_AOCL_DEVICE target aocl device string --target-aocl-keys TARGET_AOCL_KEYS target aocl keys options target nvptx: --target-nvptx-max_num_threads TARGET_NVPTX_MAX_NUM_THREADS target nvptx max_num_threads --target-nvptx-thread_warp_size TARGET_NVPTX_THREAD_WARP_SIZE target nvptx thread_warp_size --target-nvptx-from_device TARGET_NVPTX_FROM_DEVICE target nvptx from_device --target-nvptx-libs TARGET_NVPTX_LIBS target nvptx libs options --target-nvptx-model TARGET_NVPTX_MODEL target nvptx model string --target-nvptx-system-lib TARGET_NVPTX_SYSTEM_LIB target nvptx system-lib --target-nvptx-mtriple TARGET_NVPTX_MTRIPLE target nvptx mtriple string --target-nvptx-tag TARGET_NVPTX_TAG target nvptx tag string --target-nvptx-mcpu TARGET_NVPTX_MCPU target nvptx mcpu string --target-nvptx-device TARGET_NVPTX_DEVICE target nvptx device string --target-nvptx-keys TARGET_NVPTX_KEYS target nvptx keys options target opencl: --target-opencl-max_num_threads TARGET_OPENCL_MAX_NUM_THREADS target opencl max_num_threads --target-opencl-thread_warp_size TARGET_OPENCL_THREAD_WARP_SIZE target opencl thread_warp_size --target-opencl-from_device TARGET_OPENCL_FROM_DEVICE target opencl from_device --target-opencl-libs TARGET_OPENCL_LIBS target opencl libs options --target-opencl-model TARGET_OPENCL_MODEL target opencl model string --target-opencl-system-lib TARGET_OPENCL_SYSTEM_LIB target opencl system-lib --target-opencl-tag TARGET_OPENCL_TAG target opencl tag string --target-opencl-device TARGET_OPENCL_DEVICE target opencl device string --target-opencl-keys TARGET_OPENCL_KEYS target opencl keys options target metal: --target-metal-max_num_threads TARGET_METAL_MAX_NUM_THREADS target metal max_num_threads --target-metal-thread_warp_size TARGET_METAL_THREAD_WARP_SIZE target metal thread_warp_size --target-metal-from_device TARGET_METAL_FROM_DEVICE target metal from_device --target-metal-libs TARGET_METAL_LIBS target metal libs options --target-metal-keys TARGET_METAL_KEYS target metal keys options --target-metal-model TARGET_METAL_MODEL target metal model string --target-metal-system-lib TARGET_METAL_SYSTEM_LIB target metal system-lib --target-metal-tag TARGET_METAL_TAG target metal tag string --target-metal-device TARGET_METAL_DEVICE target metal device string --target-metal-max_function_args TARGET_METAL_MAX_FUNCTION_ARGS target metal max_function_args target webgpu: --target-webgpu-max_num_threads TARGET_WEBGPU_MAX_NUM_THREADS target webgpu max_num_threads --target-webgpu-from_device TARGET_WEBGPU_FROM_DEVICE target 
webgpu from_device --target-webgpu-libs TARGET_WEBGPU_LIBS target webgpu libs options --target-webgpu-model TARGET_WEBGPU_MODEL target webgpu model string --target-webgpu-system-lib TARGET_WEBGPU_SYSTEM_LIB target webgpu system-lib --target-webgpu-tag TARGET_WEBGPU_TAG target webgpu tag string --target-webgpu-device TARGET_WEBGPU_DEVICE target webgpu device string --target-webgpu-keys TARGET_WEBGPU_KEYS target webgpu keys options target rocm: --target-rocm-max_num_threads TARGET_ROCM_MAX_NUM_THREADS target rocm max_num_threads --target-rocm-thread_warp_size TARGET_ROCM_THREAD_WARP_SIZE target rocm thread_warp_size --target-rocm-from_device TARGET_ROCM_FROM_DEVICE target rocm from_device --target-rocm-libs TARGET_ROCM_LIBS target rocm libs options --target-rocm-mattr TARGET_ROCM_MATTR target rocm mattr options --target-rocm-max_shared_memory_per_block TARGET_ROCM_MAX_SHARED_MEMORY_PER_BLOCK target rocm max_shared_memory_per_block --target-rocm-model TARGET_ROCM_MODEL target rocm model string --target-rocm-system-lib TARGET_ROCM_SYSTEM_LIB target rocm system-lib --target-rocm-mtriple TARGET_ROCM_MTRIPLE target rocm mtriple string --target-rocm-tag TARGET_ROCM_TAG target rocm tag string --target-rocm-device TARGET_ROCM_DEVICE target rocm device string --target-rocm-mcpu TARGET_ROCM_MCPU target rocm mcpu string --target-rocm-max_threads_per_block TARGET_ROCM_MAX_THREADS_PER_BLOCK target rocm max_threads_per_block --target-rocm-keys TARGET_ROCM_KEYS target rocm keys options target vulkan: --target-vulkan-max_num_threads TARGET_VULKAN_MAX_NUM_THREADS target vulkan max_num_threads --target-vulkan-thread_warp_size TARGET_VULKAN_THREAD_WARP_SIZE target vulkan thread_warp_size --target-vulkan-from_device TARGET_VULKAN_FROM_DEVICE target vulkan from_device --target-vulkan-max_per_stage_descriptor_storage_buffer TARGET_VULKAN_MAX_PER_STAGE_DESCRIPTOR_STORAGE_BUFFER target vulkan max_per_stage_descriptor_storage_buffer --target-vulkan-driver_version TARGET_VULKAN_DRIVER_VERSION target vulkan driver_version --target-vulkan-supports_16bit_buffer TARGET_VULKAN_SUPPORTS_16BIT_BUFFER target vulkan supports_16bit_buffer --target-vulkan-max_block_size_z TARGET_VULKAN_MAX_BLOCK_SIZE_Z target vulkan max_block_size_z --target-vulkan-libs TARGET_VULKAN_LIBS target vulkan libs options --target-vulkan-supports_dedicated_allocation TARGET_VULKAN_SUPPORTS_DEDICATED_ALLOCATION target vulkan supports_dedicated_allocation --target-vulkan-supported_subgroup_operations TARGET_VULKAN_SUPPORTED_SUBGROUP_OPERATIONS target vulkan supported_subgroup_operations --target-vulkan-mattr TARGET_VULKAN_MATTR target vulkan mattr options --target-vulkan-max_storage_buffer_range TARGET_VULKAN_MAX_STORAGE_BUFFER_RANGE target vulkan max_storage_buffer_range --target-vulkan-max_push_constants_size TARGET_VULKAN_MAX_PUSH_CONSTANTS_SIZE target vulkan max_push_constants_size --target-vulkan-supports_push_descriptor TARGET_VULKAN_SUPPORTS_PUSH_DESCRIPTOR target vulkan supports_push_descriptor --target-vulkan-supports_int64 TARGET_VULKAN_SUPPORTS_INT64 target vulkan supports_int64 --target-vulkan-supports_float32 TARGET_VULKAN_SUPPORTS_FLOAT32 target vulkan supports_float32 --target-vulkan-model TARGET_VULKAN_MODEL target vulkan model string --target-vulkan-max_block_size_x TARGET_VULKAN_MAX_BLOCK_SIZE_X target vulkan max_block_size_x --target-vulkan-system-lib TARGET_VULKAN_SYSTEM_LIB target vulkan system-lib --target-vulkan-max_block_size_y TARGET_VULKAN_MAX_BLOCK_SIZE_Y target vulkan max_block_size_y --target-vulkan-tag TARGET_VULKAN_TAG 
target vulkan tag string --target-vulkan-supports_int8 TARGET_VULKAN_SUPPORTS_INT8 target vulkan supports_int8 --target-vulkan-max_spirv_version TARGET_VULKAN_MAX_SPIRV_VERSION target vulkan max_spirv_version --target-vulkan-vulkan_api_version TARGET_VULKAN_VULKAN_API_VERSION target vulkan vulkan_api_version --target-vulkan-supports_8bit_buffer TARGET_VULKAN_SUPPORTS_8BIT_BUFFER target vulkan supports_8bit_buffer --target-vulkan-device_type TARGET_VULKAN_DEVICE_TYPE target vulkan device_type string --target-vulkan-supports_int32 TARGET_VULKAN_SUPPORTS_INT32 target vulkan supports_int32 --target-vulkan-device TARGET_VULKAN_DEVICE target vulkan device string --target-vulkan-max_threads_per_block TARGET_VULKAN_MAX_THREADS_PER_BLOCK target vulkan max_threads_per_block --target-vulkan-max_uniform_buffer_range TARGET_VULKAN_MAX_UNIFORM_BUFFER_RANGE target vulkan max_uniform_buffer_range --target-vulkan-driver_name TARGET_VULKAN_DRIVER_NAME target vulkan driver_name string --target-vulkan-supports_integer_dot_product TARGET_VULKAN_SUPPORTS_INTEGER_DOT_PRODUCT target vulkan supports_integer_dot_product --target-vulkan-supports_storage_buffer_storage_class TARGET_VULKAN_SUPPORTS_STORAGE_BUFFER_STORAGE_CLASS target vulkan supports_storage_buffer_storage_class --target-vulkan-supports_float16 TARGET_VULKAN_SUPPORTS_FLOAT16 target vulkan supports_float16 --target-vulkan-device_name TARGET_VULKAN_DEVICE_NAME target vulkan device_name string --target-vulkan-supports_float64 TARGET_VULKAN_SUPPORTS_FLOAT64 target vulkan supports_float64 --target-vulkan-keys TARGET_VULKAN_KEYS target vulkan keys options --target-vulkan-max_shared_memory_per_block TARGET_VULKAN_MAX_SHARED_MEMORY_PER_BLOCK target vulkan max_shared_memory_per_block --target-vulkan-supports_int16 TARGET_VULKAN_SUPPORTS_INT16 target vulkan supports_int16 target cuda: --target-cuda-max_num_threads TARGET_CUDA_MAX_NUM_THREADS target cuda max_num_threads --target-cuda-thread_warp_size TARGET_CUDA_THREAD_WARP_SIZE target cuda thread_warp_size --target-cuda-from_device TARGET_CUDA_FROM_DEVICE target cuda from_device --target-cuda-arch TARGET_CUDA_ARCH target cuda arch string --target-cuda-libs TARGET_CUDA_LIBS target cuda libs options --target-cuda-max_shared_memory_per_block TARGET_CUDA_MAX_SHARED_MEMORY_PER_BLOCK target cuda max_shared_memory_per_block --target-cuda-model TARGET_CUDA_MODEL target cuda model string --target-cuda-system-lib TARGET_CUDA_SYSTEM_LIB target cuda system-lib --target-cuda-tag TARGET_CUDA_TAG target cuda tag string --target-cuda-device TARGET_CUDA_DEVICE target cuda device string --target-cuda-mcpu TARGET_CUDA_MCPU target cuda mcpu string --target-cuda-max_threads_per_block TARGET_CUDA_MAX_THREADS_PER_BLOCK target cuda max_threads_per_block --target-cuda-registers_per_block TARGET_CUDA_REGISTERS_PER_BLOCK target cuda registers_per_block --target-cuda-keys TARGET_CUDA_KEYS target cuda keys options target sdaccel: --target-sdaccel-from_device TARGET_SDACCEL_FROM_DEVICE target sdaccel from_device --target-sdaccel-libs TARGET_SDACCEL_LIBS target sdaccel libs options --target-sdaccel-model TARGET_SDACCEL_MODEL target sdaccel model string --target-sdaccel-system-lib TARGET_SDACCEL_SYSTEM_LIB target sdaccel system-lib --target-sdaccel-tag TARGET_SDACCEL_TAG target sdaccel tag string --target-sdaccel-device TARGET_SDACCEL_DEVICE target sdaccel device string --target-sdaccel-keys TARGET_SDACCEL_KEYS target sdaccel keys options target composite: --target-composite-from_device TARGET_COMPOSITE_FROM_DEVICE target composite from_device 
--target-composite-libs TARGET_COMPOSITE_LIBS target composite libs options --target-composite-devices TARGET_COMPOSITE_DEVICES target composite devices options --target-composite-model TARGET_COMPOSITE_MODEL target composite model string --target-composite-tag TARGET_COMPOSITE_TAG target composite tag string --target-composite-device TARGET_COMPOSITE_DEVICE target composite device string --target-composite-keys TARGET_COMPOSITE_KEYS target composite keys options target stackvm: --target-stackvm-from_device TARGET_STACKVM_FROM_DEVICE target stackvm from_device --target-stackvm-libs TARGET_STACKVM_LIBS target stackvm libs options --target-stackvm-model TARGET_STACKVM_MODEL target stackvm model string --target-stackvm-system-lib TARGET_STACKVM_SYSTEM_LIB target stackvm system-lib --target-stackvm-tag TARGET_STACKVM_TAG target stackvm tag string --target-stackvm-device TARGET_STACKVM_DEVICE target stackvm device string --target-stackvm-keys TARGET_STACKVM_KEYS target stackvm keys options target aocl_sw_emu: --target-aocl_sw_emu-from_device TARGET_AOCL_SW_EMU_FROM_DEVICE target aocl_sw_emu from_device --target-aocl_sw_emu-libs TARGET_AOCL_SW_EMU_LIBS target aocl_sw_emu libs options --target-aocl_sw_emu-model TARGET_AOCL_SW_EMU_MODEL target aocl_sw_emu model string --target-aocl_sw_emu-system-lib TARGET_AOCL_SW_EMU_SYSTEM_LIB target aocl_sw_emu system-lib --target-aocl_sw_emu-tag TARGET_AOCL_SW_EMU_TAG target aocl_sw_emu tag string --target-aocl_sw_emu-device TARGET_AOCL_SW_EMU_DEVICE target aocl_sw_emu device string --target-aocl_sw_emu-keys TARGET_AOCL_SW_EMU_KEYS target aocl_sw_emu keys options target c: --target-c-unpacked-api TARGET_C_UNPACKED_API target c unpacked-api --target-c-from_device TARGET_C_FROM_DEVICE target c from_device --target-c-libs TARGET_C_LIBS target c libs options --target-c-constants-byte-alignment TARGET_C_CONSTANTS_BYTE_ALIGNMENT target c constants-byte-alignment --target-c-executor TARGET_C_EXECUTOR target c executor string --target-c-link-params TARGET_C_LINK_PARAMS target c link-params --target-c-model TARGET_C_MODEL target c model string --target-c-workspace-byte-alignment TARGET_C_WORKSPACE_BYTE_ALIGNMENT target c workspace-byte-alignment --target-c-system-lib TARGET_C_SYSTEM_LIB target c system-lib --target-c-tag TARGET_C_TAG target c tag string --target-c-interface-api TARGET_C_INTERFACE_API target c interface-api string --target-c-mcpu TARGET_C_MCPU target c mcpu string --target-c-device TARGET_C_DEVICE target c device string --target-c-runtime TARGET_C_RUNTIME target c runtime string --target-c-keys TARGET_C_KEYS target c keys options --target-c-march TARGET_C_MARCH target c march string target hexagon: --target-hexagon-from_device TARGET_HEXAGON_FROM_DEVICE target hexagon from_device --target-hexagon-libs TARGET_HEXAGON_LIBS target hexagon libs options --target-hexagon-mattr TARGET_HEXAGON_MATTR target hexagon mattr options --target-hexagon-model TARGET_HEXAGON_MODEL target hexagon model string --target-hexagon-llvm-options TARGET_HEXAGON_LLVM_OPTIONS target hexagon llvm-options options --target-hexagon-mtriple TARGET_HEXAGON_MTRIPLE target hexagon mtriple string --target-hexagon-system-lib TARGET_HEXAGON_SYSTEM_LIB target hexagon system-lib --target-hexagon-mcpu TARGET_HEXAGON_MCPU target hexagon mcpu string --target-hexagon-device TARGET_HEXAGON_DEVICE target hexagon device string --target-hexagon-tag TARGET_HEXAGON_TAG target hexagon tag string --target-hexagon-link-params TARGET_HEXAGON_LINK_PARAMS target hexagon link-params --target-hexagon-keys 
TARGET_HEXAGON_KEYS target hexagon keys options AutoScheduler options: AutoScheduler options, used when --enable-autoscheduler is provided --cache-line-bytes CACHE_LINE_BYTES the size of cache line in bytes. If not specified, it will be autoset for the current machine. --num-cores NUM_CORES the number of device cores. If not specified, it will be autoset for the current machine. --vector-unit-bytes VECTOR_UNIT_BYTES the width of vector units in bytes. If not specified, it will be autoset for the current machine. --max-shared-memory-per-block MAX_SHARED_MEMORY_PER_BLOCK the max shared memory per block in bytes. If not specified, it will be autoset for the current machine. --max-local-memory-per-block MAX_LOCAL_MEMORY_PER_BLOCK the max local memory per block in bytes. If not specified, it will be autoset for the current machine. --max-threads-per-block MAX_THREADS_PER_BLOCK the max number of threads per block. If not specified, it will be autoset for the current machine. --max-vthread-extent MAX_VTHREAD_EXTENT the max vthread extent. If not specified, it will be autoset for the current machine. --warp-size WARP_SIZE the thread numbers of a warp. If not specified, it will be autoset for the current machine. --include-simple-tasks whether to extract simple tasks that do not include complicated ops --log-estimated-latency whether to log the estimated latency to the file after tuning a task AutoTVM options: AutoTVM options, used when the AutoScheduler is not enabled --tuner {ga,gridsearch,random,xgb,xgb_knob,xgb-rank} type of tuner to use when tuning with autotvm. ###Markdown 对于消费级 Skylake CPU 来说,输出结果将是这样的: ###Code !python -m tvm.driver.tvmc tune \ --target "llvm -mcpu=broadwell" \ --output resnet50-v2-7-autotuner_records.json \ ../../_models/resnet50-v2-7.onnx ###Output /media/pc/data/4tb/lxw/anaconda3/envs/mx39/lib/python3.9/site-packages/xgboost/compat.py:36: FutureWarning: pandas.Int64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead. from pandas import MultiIndex, Int64Index [Task 1/25] Current/Best: 135.54/ 444.49 GFLOPS | Progress: (40/40) | 16.09 s Done. [Task 2/25] Current/Best: 91.39/ 426.70 GFLOPS | Progress: (40/40) | 10.33 s Done. [Task 3/25] Current/Best: 147.25/ 516.21 GFLOPS | Progress: (40/40) | 11.55 s Done. [Task 4/25] Current/Best: 561.81/ 561.81 GFLOPS | Progress: (40/40) | 12.99 s Done. [Task 5/25] Current/Best: 182.70/ 570.25 GFLOPS | Progress: (40/40) | 11.12 s Done. [Task 6/25] Current/Best: 79.82/ 459.29 GFLOPS | Progress: (40/40) | 12.03 s Done. [Task 7/25] Current/Best: 152.79/ 300.64 GFLOPS | Progress: (40/40) | 11.16 s Done. [Task 8/25] Current/Best: 155.29/ 310.77 GFLOPS | Progress: (40/40) | 14.68 s Done. [Task 9/25] Current/Best: 126.56/ 561.24 GFLOPS | Progress: (40/40) | 13.93 s Done. [Task 10/25] Current/Best: 41.68/ 517.18 GFLOPS | Progress: (40/40) | 10.91 s Done. [Task 11/25] Current/Best: 311.13/ 528.67 GFLOPS | Progress: (40/40) | 10.89 s Done. [Task 12/25] Current/Best: 265.13/ 525.74 GFLOPS | Progress: (40/40) | 11.19 s Done. [Task 13/25] Current/Best: 107.09/ 426.10 GFLOPS | Progress: (40/40) | 11.29 s Done. [Task 14/25] Current/Best: 119.32/ 373.60 GFLOPS | Progress: (40/40) | 12.38 s Done. [Task 15/25] Current/Best: 101.58/ 439.72 GFLOPS | Progress: (40/40) | 14.41 s Done. [Task 16/25] Current/Best: 177.78/ 427.98 GFLOPS | Progress: (40/40) | 10.23 s Done. [Task 17/25] Current/Best: 72.04/ 349.15 GFLOPS | Progress: (40/40) | 11.50 s Done. 
[Task 18/25] Current/Best: 124.41/ 500.93 GFLOPS | Progress: (40/40) | 12.07 s Done. [Task 19/25] Current/Best: 243.37/ 371.27 GFLOPS | Progress: (40/40) | 12.88 s Done. [Task 20/25] Current/Best: 137.63/ 343.57 GFLOPS | Progress: (40/40) | 21.29 s Done. [Task 21/25] Current/Best: 59.02/ 330.98 GFLOPS | Progress: (40/40) | 12.88 s Done. [Task 22/25] Current/Best: 273.71/ 457.41 GFLOPS | Progress: (40/40) | 11.04 s Done. [Task 23/25] Current/Best: 166.89/ 430.39 GFLOPS | Progress: (40/40) | 13.46 s Done. [Task 25/25] Current/Best: 28.01/ 59.42 GFLOPS | Progress: (40/40) | 20.24 s Done. Done. ###Markdown 调谐会话可能需要很长的时间,所以 ``tvmc tune`` 提供了许多选项来定制你的调谐过程,在重复次数方面(例如 ``--repeat`` 和 ``--number``),要使用的调谐算法等等。 用调优数据编译优化后的模型作为上述调谐过程的输出,获得了存储在 ``resnet50-v2-7-autotuner_records.json`` 的调谐记录。这个文件可以有两种使用方式:- 作为进一步调谐的输入(通过 ``tvmc tune --tuning-records``)。- 作为对编译器的输入编译器将使用这些结果来为你指定的目标上的模型生成高性能代码。要做到这一点,可以使用 ``tvmc compile --tuning-records``。获得更多信息: ###Code !python -m tvm.driver.tvmc compile --help ###Output usage: tvmc compile [-h] [--cross-compiler CROSS_COMPILER] [--cross-compiler-options CROSS_COMPILER_OPTIONS] [--desired-layout {NCHW,NHWC}] [--dump-code FORMAT] [--model-format {keras,onnx,pb,tflite,pytorch,paddle}] [-o OUTPUT] [-f {so,mlf}] [--pass-config name=value] [--target TARGET] [--target-example_target_hook-from_device TARGET_EXAMPLE_TARGET_HOOK_FROM_DEVICE] [--target-example_target_hook-libs TARGET_EXAMPLE_TARGET_HOOK_LIBS] [--target-example_target_hook-model TARGET_EXAMPLE_TARGET_HOOK_MODEL] [--target-example_target_hook-tag TARGET_EXAMPLE_TARGET_HOOK_TAG] [--target-example_target_hook-device TARGET_EXAMPLE_TARGET_HOOK_DEVICE] [--target-example_target_hook-keys TARGET_EXAMPLE_TARGET_HOOK_KEYS] [--target-ext_dev-from_device TARGET_EXT_DEV_FROM_DEVICE] [--target-ext_dev-libs TARGET_EXT_DEV_LIBS] [--target-ext_dev-model TARGET_EXT_DEV_MODEL] [--target-ext_dev-system-lib TARGET_EXT_DEV_SYSTEM_LIB] [--target-ext_dev-tag TARGET_EXT_DEV_TAG] [--target-ext_dev-device TARGET_EXT_DEV_DEVICE] [--target-ext_dev-keys TARGET_EXT_DEV_KEYS] [--target-llvm-fast-math TARGET_LLVM_FAST_MATH] [--target-llvm-opt-level TARGET_LLVM_OPT_LEVEL] [--target-llvm-unpacked-api TARGET_LLVM_UNPACKED_API] [--target-llvm-from_device TARGET_LLVM_FROM_DEVICE] [--target-llvm-fast-math-ninf TARGET_LLVM_FAST_MATH_NINF] [--target-llvm-mattr TARGET_LLVM_MATTR] [--target-llvm-num-cores TARGET_LLVM_NUM_CORES] [--target-llvm-libs TARGET_LLVM_LIBS] [--target-llvm-fast-math-nsz TARGET_LLVM_FAST_MATH_NSZ] [--target-llvm-link-params TARGET_LLVM_LINK_PARAMS] [--target-llvm-interface-api TARGET_LLVM_INTERFACE_API] [--target-llvm-fast-math-contract TARGET_LLVM_FAST_MATH_CONTRACT] [--target-llvm-system-lib TARGET_LLVM_SYSTEM_LIB] [--target-llvm-tag TARGET_LLVM_TAG] [--target-llvm-mtriple TARGET_LLVM_MTRIPLE] [--target-llvm-model TARGET_LLVM_MODEL] [--target-llvm-mfloat-abi TARGET_LLVM_MFLOAT_ABI] [--target-llvm-mcpu TARGET_LLVM_MCPU] [--target-llvm-device TARGET_LLVM_DEVICE] [--target-llvm-runtime TARGET_LLVM_RUNTIME] [--target-llvm-fast-math-arcp TARGET_LLVM_FAST_MATH_ARCP] [--target-llvm-fast-math-reassoc TARGET_LLVM_FAST_MATH_REASSOC] [--target-llvm-mabi TARGET_LLVM_MABI] [--target-llvm-keys TARGET_LLVM_KEYS] [--target-llvm-fast-math-nnan TARGET_LLVM_FAST_MATH_NNAN] [--target-hybrid-from_device TARGET_HYBRID_FROM_DEVICE] [--target-hybrid-libs TARGET_HYBRID_LIBS] [--target-hybrid-model TARGET_HYBRID_MODEL] [--target-hybrid-system-lib TARGET_HYBRID_SYSTEM_LIB] [--target-hybrid-tag TARGET_HYBRID_TAG] [--target-hybrid-device TARGET_HYBRID_DEVICE] 
[--target-hybrid-keys TARGET_HYBRID_KEYS] [--target-aocl-from_device TARGET_AOCL_FROM_DEVICE] [--target-aocl-libs TARGET_AOCL_LIBS] [--target-aocl-model TARGET_AOCL_MODEL] [--target-aocl-system-lib TARGET_AOCL_SYSTEM_LIB] [--target-aocl-tag TARGET_AOCL_TAG] [--target-aocl-device TARGET_AOCL_DEVICE] [--target-aocl-keys TARGET_AOCL_KEYS] [--target-nvptx-max_num_threads TARGET_NVPTX_MAX_NUM_THREADS] [--target-nvptx-thread_warp_size TARGET_NVPTX_THREAD_WARP_SIZE] [--target-nvptx-from_device TARGET_NVPTX_FROM_DEVICE] [--target-nvptx-libs TARGET_NVPTX_LIBS] [--target-nvptx-model TARGET_NVPTX_MODEL] [--target-nvptx-system-lib TARGET_NVPTX_SYSTEM_LIB] [--target-nvptx-mtriple TARGET_NVPTX_MTRIPLE] [--target-nvptx-tag TARGET_NVPTX_TAG] [--target-nvptx-mcpu TARGET_NVPTX_MCPU] [--target-nvptx-device TARGET_NVPTX_DEVICE] [--target-nvptx-keys TARGET_NVPTX_KEYS] [--target-opencl-max_num_threads TARGET_OPENCL_MAX_NUM_THREADS] [--target-opencl-thread_warp_size TARGET_OPENCL_THREAD_WARP_SIZE] [--target-opencl-from_device TARGET_OPENCL_FROM_DEVICE] [--target-opencl-libs TARGET_OPENCL_LIBS] [--target-opencl-model TARGET_OPENCL_MODEL] [--target-opencl-system-lib TARGET_OPENCL_SYSTEM_LIB] [--target-opencl-tag TARGET_OPENCL_TAG] [--target-opencl-device TARGET_OPENCL_DEVICE] [--target-opencl-keys TARGET_OPENCL_KEYS] [--target-metal-max_num_threads TARGET_METAL_MAX_NUM_THREADS] [--target-metal-thread_warp_size TARGET_METAL_THREAD_WARP_SIZE] [--target-metal-from_device TARGET_METAL_FROM_DEVICE] [--target-metal-libs TARGET_METAL_LIBS] [--target-metal-keys TARGET_METAL_KEYS] [--target-metal-model TARGET_METAL_MODEL] [--target-metal-system-lib TARGET_METAL_SYSTEM_LIB] [--target-metal-tag TARGET_METAL_TAG] [--target-metal-device TARGET_METAL_DEVICE] [--target-metal-max_function_args TARGET_METAL_MAX_FUNCTION_ARGS] [--target-webgpu-max_num_threads TARGET_WEBGPU_MAX_NUM_THREADS] [--target-webgpu-from_device TARGET_WEBGPU_FROM_DEVICE] [--target-webgpu-libs TARGET_WEBGPU_LIBS] [--target-webgpu-model TARGET_WEBGPU_MODEL] [--target-webgpu-system-lib TARGET_WEBGPU_SYSTEM_LIB] [--target-webgpu-tag TARGET_WEBGPU_TAG] [--target-webgpu-device TARGET_WEBGPU_DEVICE] [--target-webgpu-keys TARGET_WEBGPU_KEYS] [--target-rocm-max_num_threads TARGET_ROCM_MAX_NUM_THREADS] [--target-rocm-thread_warp_size TARGET_ROCM_THREAD_WARP_SIZE] [--target-rocm-from_device TARGET_ROCM_FROM_DEVICE] [--target-rocm-libs TARGET_ROCM_LIBS] [--target-rocm-mattr TARGET_ROCM_MATTR] [--target-rocm-max_shared_memory_per_block TARGET_ROCM_MAX_SHARED_MEMORY_PER_BLOCK] [--target-rocm-model TARGET_ROCM_MODEL] [--target-rocm-system-lib TARGET_ROCM_SYSTEM_LIB] [--target-rocm-mtriple TARGET_ROCM_MTRIPLE] [--target-rocm-tag TARGET_ROCM_TAG] [--target-rocm-device TARGET_ROCM_DEVICE] [--target-rocm-mcpu TARGET_ROCM_MCPU] [--target-rocm-max_threads_per_block TARGET_ROCM_MAX_THREADS_PER_BLOCK] [--target-rocm-keys TARGET_ROCM_KEYS] [--target-vulkan-max_num_threads TARGET_VULKAN_MAX_NUM_THREADS] [--target-vulkan-thread_warp_size TARGET_VULKAN_THREAD_WARP_SIZE] [--target-vulkan-from_device TARGET_VULKAN_FROM_DEVICE] [--target-vulkan-max_per_stage_descriptor_storage_buffer TARGET_VULKAN_MAX_PER_STAGE_DESCRIPTOR_STORAGE_BUFFER] [--target-vulkan-driver_version TARGET_VULKAN_DRIVER_VERSION] [--target-vulkan-supports_16bit_buffer TARGET_VULKAN_SUPPORTS_16BIT_BUFFER] [--target-vulkan-max_block_size_z TARGET_VULKAN_MAX_BLOCK_SIZE_Z] [--target-vulkan-libs TARGET_VULKAN_LIBS] [--target-vulkan-supports_dedicated_allocation TARGET_VULKAN_SUPPORTS_DEDICATED_ALLOCATION] 
[--target-vulkan-supported_subgroup_operations TARGET_VULKAN_SUPPORTED_SUBGROUP_OPERATIONS] [--target-vulkan-mattr TARGET_VULKAN_MATTR] [--target-vulkan-max_storage_buffer_range TARGET_VULKAN_MAX_STORAGE_BUFFER_RANGE] [--target-vulkan-max_push_constants_size TARGET_VULKAN_MAX_PUSH_CONSTANTS_SIZE] [--target-vulkan-supports_push_descriptor TARGET_VULKAN_SUPPORTS_PUSH_DESCRIPTOR] [--target-vulkan-supports_int64 TARGET_VULKAN_SUPPORTS_INT64] [--target-vulkan-supports_float32 TARGET_VULKAN_SUPPORTS_FLOAT32] [--target-vulkan-model TARGET_VULKAN_MODEL] [--target-vulkan-max_block_size_x TARGET_VULKAN_MAX_BLOCK_SIZE_X] [--target-vulkan-system-lib TARGET_VULKAN_SYSTEM_LIB] [--target-vulkan-max_block_size_y TARGET_VULKAN_MAX_BLOCK_SIZE_Y] [--target-vulkan-tag TARGET_VULKAN_TAG] [--target-vulkan-supports_int8 TARGET_VULKAN_SUPPORTS_INT8] [--target-vulkan-max_spirv_version TARGET_VULKAN_MAX_SPIRV_VERSION] [--target-vulkan-vulkan_api_version TARGET_VULKAN_VULKAN_API_VERSION] [--target-vulkan-supports_8bit_buffer TARGET_VULKAN_SUPPORTS_8BIT_BUFFER] [--target-vulkan-device_type TARGET_VULKAN_DEVICE_TYPE] [--target-vulkan-supports_int32 TARGET_VULKAN_SUPPORTS_INT32] [--target-vulkan-device TARGET_VULKAN_DEVICE] [--target-vulkan-max_threads_per_block TARGET_VULKAN_MAX_THREADS_PER_BLOCK] [--target-vulkan-max_uniform_buffer_range TARGET_VULKAN_MAX_UNIFORM_BUFFER_RANGE] [--target-vulkan-driver_name TARGET_VULKAN_DRIVER_NAME] [--target-vulkan-supports_integer_dot_product TARGET_VULKAN_SUPPORTS_INTEGER_DOT_PRODUCT] [--target-vulkan-supports_storage_buffer_storage_class TARGET_VULKAN_SUPPORTS_STORAGE_BUFFER_STORAGE_CLASS] [--target-vulkan-supports_float16 TARGET_VULKAN_SUPPORTS_FLOAT16] [--target-vulkan-device_name TARGET_VULKAN_DEVICE_NAME] [--target-vulkan-supports_float64 TARGET_VULKAN_SUPPORTS_FLOAT64] [--target-vulkan-keys TARGET_VULKAN_KEYS] [--target-vulkan-max_shared_memory_per_block TARGET_VULKAN_MAX_SHARED_MEMORY_PER_BLOCK] [--target-vulkan-supports_int16 TARGET_VULKAN_SUPPORTS_INT16] [--target-cuda-max_num_threads TARGET_CUDA_MAX_NUM_THREADS] [--target-cuda-thread_warp_size TARGET_CUDA_THREAD_WARP_SIZE] [--target-cuda-from_device TARGET_CUDA_FROM_DEVICE] [--target-cuda-arch TARGET_CUDA_ARCH] [--target-cuda-libs TARGET_CUDA_LIBS] [--target-cuda-max_shared_memory_per_block TARGET_CUDA_MAX_SHARED_MEMORY_PER_BLOCK] [--target-cuda-model TARGET_CUDA_MODEL] [--target-cuda-system-lib TARGET_CUDA_SYSTEM_LIB] [--target-cuda-tag TARGET_CUDA_TAG] [--target-cuda-device TARGET_CUDA_DEVICE] [--target-cuda-mcpu TARGET_CUDA_MCPU] [--target-cuda-max_threads_per_block TARGET_CUDA_MAX_THREADS_PER_BLOCK] [--target-cuda-registers_per_block TARGET_CUDA_REGISTERS_PER_BLOCK] [--target-cuda-keys TARGET_CUDA_KEYS] [--target-sdaccel-from_device TARGET_SDACCEL_FROM_DEVICE] [--target-sdaccel-libs TARGET_SDACCEL_LIBS] [--target-sdaccel-model TARGET_SDACCEL_MODEL] [--target-sdaccel-system-lib TARGET_SDACCEL_SYSTEM_LIB] [--target-sdaccel-tag TARGET_SDACCEL_TAG] [--target-sdaccel-device TARGET_SDACCEL_DEVICE] [--target-sdaccel-keys TARGET_SDACCEL_KEYS] [--target-composite-from_device TARGET_COMPOSITE_FROM_DEVICE] [--target-composite-libs TARGET_COMPOSITE_LIBS] [--target-composite-devices TARGET_COMPOSITE_DEVICES] [--target-composite-model TARGET_COMPOSITE_MODEL] [--target-composite-tag TARGET_COMPOSITE_TAG] [--target-composite-device TARGET_COMPOSITE_DEVICE] [--target-composite-keys TARGET_COMPOSITE_KEYS] [--target-stackvm-from_device TARGET_STACKVM_FROM_DEVICE] [--target-stackvm-libs TARGET_STACKVM_LIBS] [--target-stackvm-model 
TARGET_STACKVM_MODEL] [--target-stackvm-system-lib TARGET_STACKVM_SYSTEM_LIB] [--target-stackvm-tag TARGET_STACKVM_TAG] [--target-stackvm-device TARGET_STACKVM_DEVICE] [--target-stackvm-keys TARGET_STACKVM_KEYS] [--target-aocl_sw_emu-from_device TARGET_AOCL_SW_EMU_FROM_DEVICE] [--target-aocl_sw_emu-libs TARGET_AOCL_SW_EMU_LIBS] [--target-aocl_sw_emu-model TARGET_AOCL_SW_EMU_MODEL] [--target-aocl_sw_emu-system-lib TARGET_AOCL_SW_EMU_SYSTEM_LIB] [--target-aocl_sw_emu-tag TARGET_AOCL_SW_EMU_TAG] [--target-aocl_sw_emu-device TARGET_AOCL_SW_EMU_DEVICE] [--target-aocl_sw_emu-keys TARGET_AOCL_SW_EMU_KEYS] [--target-c-unpacked-api TARGET_C_UNPACKED_API] [--target-c-from_device TARGET_C_FROM_DEVICE] [--target-c-libs TARGET_C_LIBS] [--target-c-constants-byte-alignment TARGET_C_CONSTANTS_BYTE_ALIGNMENT] [--target-c-executor TARGET_C_EXECUTOR] [--target-c-link-params TARGET_C_LINK_PARAMS] [--target-c-model TARGET_C_MODEL] [--target-c-workspace-byte-alignment TARGET_C_WORKSPACE_BYTE_ALIGNMENT] [--target-c-system-lib TARGET_C_SYSTEM_LIB] [--target-c-tag TARGET_C_TAG] [--target-c-interface-api TARGET_C_INTERFACE_API] [--target-c-mcpu TARGET_C_MCPU] [--target-c-device TARGET_C_DEVICE] [--target-c-runtime TARGET_C_RUNTIME] [--target-c-keys TARGET_C_KEYS] [--target-c-march TARGET_C_MARCH] [--target-hexagon-from_device TARGET_HEXAGON_FROM_DEVICE] [--target-hexagon-libs TARGET_HEXAGON_LIBS] [--target-hexagon-mattr TARGET_HEXAGON_MATTR] [--target-hexagon-model TARGET_HEXAGON_MODEL] [--target-hexagon-llvm-options TARGET_HEXAGON_LLVM_OPTIONS] [--target-hexagon-mtriple TARGET_HEXAGON_MTRIPLE] [--target-hexagon-system-lib TARGET_HEXAGON_SYSTEM_LIB] [--target-hexagon-mcpu TARGET_HEXAGON_MCPU] [--target-hexagon-device TARGET_HEXAGON_DEVICE] [--target-hexagon-tag TARGET_HEXAGON_TAG] [--target-hexagon-link-params TARGET_HEXAGON_LINK_PARAMS] [--target-hexagon-keys TARGET_HEXAGON_KEYS] [--tuning-records PATH] [--executor EXECUTOR] [--executor-graph-link-params EXECUTOR_GRAPH_LINK_PARAMS] [--executor-aot-workspace-byte-alignment EXECUTOR_AOT_WORKSPACE_BYTE_ALIGNMENT] [--executor-aot-unpacked-api EXECUTOR_AOT_UNPACKED_API] [--executor-aot-interface-api EXECUTOR_AOT_INTERFACE_API] [--executor-aot-link-params EXECUTOR_AOT_LINK_PARAMS] [--runtime RUNTIME] [--runtime-cpp-system-lib RUNTIME_CPP_SYSTEM_LIB] [--runtime-crt-system-lib RUNTIME_CRT_SYSTEM_LIB] [-v] [-O [0-3]] [--input-shapes INPUT_SHAPES] [--disabled-pass DISABLED_PASS] [--module-name MODULE_NAME] FILE positional arguments: FILE path to the input model file. optional arguments: -h, --help show this help message and exit --cross-compiler CROSS_COMPILER the cross compiler to generate target libraries, e.g. 'aarch64-linux-gnu-gcc'. --cross-compiler-options CROSS_COMPILER_OPTIONS the cross compiler options to generate target libraries, e.g. '-mfpu=neon-vfpv4'. --desired-layout {NCHW,NHWC} change the data layout of the whole graph. --dump-code FORMAT comma separated list of formats to export the input model, e.g. 'asm,ll,relay'. --model-format {keras,onnx,pb,tflite,pytorch,paddle} specify input model format. -o OUTPUT, --output OUTPUT output the compiled module to a specified archive. Defaults to 'module.tar'. -f {so,mlf}, --output-format {so,mlf} output format. Use 'so' for shared object or 'mlf' for Model Library Format (only for microTVM targets). Defaults to 'so'. --pass-config name=value configurations to be used at compile time. This option can be provided multiple times, each one to set one configuration value, e.g. 
'--pass-config relay.backend.use_auto_scheduler=0', e.g. '--pass- config tir.add_lower_pass=opt_level1,pass1,opt_level2,pass2'. --target TARGET compilation target as plain string, inline JSON or path to a JSON file --tuning-records PATH path to an auto-tuning log file by AutoTVM. If not presented, the fallback/tophub configs will be used. --executor EXECUTOR Executor to compile the model with --runtime RUNTIME Runtime to compile the model with -v, --verbose increase verbosity. -O [0-3], --opt-level [0-3] specify which optimization level to use. Defaults to '3'. --input-shapes INPUT_SHAPES specify non-generic shapes for model to run, format is "input_name:[dim1,dim2,...,dimn] input_name2:[dim1,dim2]". --disabled-pass DISABLED_PASS disable specific passes, comma-separated list of pass names. --module-name MODULE_NAME The output module name. Defaults to 'default'. target example_target_hook: --target-example_target_hook-from_device TARGET_EXAMPLE_TARGET_HOOK_FROM_DEVICE target example_target_hook from_device --target-example_target_hook-libs TARGET_EXAMPLE_TARGET_HOOK_LIBS target example_target_hook libs options --target-example_target_hook-model TARGET_EXAMPLE_TARGET_HOOK_MODEL target example_target_hook model string --target-example_target_hook-tag TARGET_EXAMPLE_TARGET_HOOK_TAG target example_target_hook tag string --target-example_target_hook-device TARGET_EXAMPLE_TARGET_HOOK_DEVICE target example_target_hook device string --target-example_target_hook-keys TARGET_EXAMPLE_TARGET_HOOK_KEYS target example_target_hook keys options target ext_dev: --target-ext_dev-from_device TARGET_EXT_DEV_FROM_DEVICE target ext_dev from_device --target-ext_dev-libs TARGET_EXT_DEV_LIBS target ext_dev libs options --target-ext_dev-model TARGET_EXT_DEV_MODEL target ext_dev model string --target-ext_dev-system-lib TARGET_EXT_DEV_SYSTEM_LIB target ext_dev system-lib --target-ext_dev-tag TARGET_EXT_DEV_TAG target ext_dev tag string --target-ext_dev-device TARGET_EXT_DEV_DEVICE target ext_dev device string --target-ext_dev-keys TARGET_EXT_DEV_KEYS target ext_dev keys options target llvm: --target-llvm-fast-math TARGET_LLVM_FAST_MATH target llvm fast-math --target-llvm-opt-level TARGET_LLVM_OPT_LEVEL target llvm opt-level --target-llvm-unpacked-api TARGET_LLVM_UNPACKED_API target llvm unpacked-api --target-llvm-from_device TARGET_LLVM_FROM_DEVICE target llvm from_device --target-llvm-fast-math-ninf TARGET_LLVM_FAST_MATH_NINF target llvm fast-math-ninf --target-llvm-mattr TARGET_LLVM_MATTR target llvm mattr options --target-llvm-num-cores TARGET_LLVM_NUM_CORES target llvm num-cores --target-llvm-libs TARGET_LLVM_LIBS target llvm libs options --target-llvm-fast-math-nsz TARGET_LLVM_FAST_MATH_NSZ target llvm fast-math-nsz --target-llvm-link-params TARGET_LLVM_LINK_PARAMS target llvm link-params --target-llvm-interface-api TARGET_LLVM_INTERFACE_API target llvm interface-api string --target-llvm-fast-math-contract TARGET_LLVM_FAST_MATH_CONTRACT target llvm fast-math-contract --target-llvm-system-lib TARGET_LLVM_SYSTEM_LIB target llvm system-lib --target-llvm-tag TARGET_LLVM_TAG target llvm tag string --target-llvm-mtriple TARGET_LLVM_MTRIPLE target llvm mtriple string --target-llvm-model TARGET_LLVM_MODEL target llvm model string --target-llvm-mfloat-abi TARGET_LLVM_MFLOAT_ABI target llvm mfloat-abi string --target-llvm-mcpu TARGET_LLVM_MCPU target llvm mcpu string --target-llvm-device TARGET_LLVM_DEVICE target llvm device string --target-llvm-runtime TARGET_LLVM_RUNTIME target llvm runtime string 
--target-llvm-fast-math-arcp TARGET_LLVM_FAST_MATH_ARCP target llvm fast-math-arcp --target-llvm-fast-math-reassoc TARGET_LLVM_FAST_MATH_REASSOC target llvm fast-math-reassoc --target-llvm-mabi TARGET_LLVM_MABI target llvm mabi string --target-llvm-keys TARGET_LLVM_KEYS target llvm keys options --target-llvm-fast-math-nnan TARGET_LLVM_FAST_MATH_NNAN target llvm fast-math-nnan target hybrid: --target-hybrid-from_device TARGET_HYBRID_FROM_DEVICE target hybrid from_device --target-hybrid-libs TARGET_HYBRID_LIBS target hybrid libs options --target-hybrid-model TARGET_HYBRID_MODEL target hybrid model string --target-hybrid-system-lib TARGET_HYBRID_SYSTEM_LIB target hybrid system-lib --target-hybrid-tag TARGET_HYBRID_TAG target hybrid tag string --target-hybrid-device TARGET_HYBRID_DEVICE target hybrid device string --target-hybrid-keys TARGET_HYBRID_KEYS target hybrid keys options target aocl: --target-aocl-from_device TARGET_AOCL_FROM_DEVICE target aocl from_device --target-aocl-libs TARGET_AOCL_LIBS target aocl libs options --target-aocl-model TARGET_AOCL_MODEL target aocl model string --target-aocl-system-lib TARGET_AOCL_SYSTEM_LIB target aocl system-lib --target-aocl-tag TARGET_AOCL_TAG target aocl tag string --target-aocl-device TARGET_AOCL_DEVICE target aocl device string --target-aocl-keys TARGET_AOCL_KEYS target aocl keys options target nvptx: --target-nvptx-max_num_threads TARGET_NVPTX_MAX_NUM_THREADS target nvptx max_num_threads --target-nvptx-thread_warp_size TARGET_NVPTX_THREAD_WARP_SIZE target nvptx thread_warp_size --target-nvptx-from_device TARGET_NVPTX_FROM_DEVICE target nvptx from_device --target-nvptx-libs TARGET_NVPTX_LIBS target nvptx libs options --target-nvptx-model TARGET_NVPTX_MODEL target nvptx model string --target-nvptx-system-lib TARGET_NVPTX_SYSTEM_LIB target nvptx system-lib --target-nvptx-mtriple TARGET_NVPTX_MTRIPLE target nvptx mtriple string --target-nvptx-tag TARGET_NVPTX_TAG target nvptx tag string --target-nvptx-mcpu TARGET_NVPTX_MCPU target nvptx mcpu string --target-nvptx-device TARGET_NVPTX_DEVICE target nvptx device string --target-nvptx-keys TARGET_NVPTX_KEYS target nvptx keys options target opencl: --target-opencl-max_num_threads TARGET_OPENCL_MAX_NUM_THREADS target opencl max_num_threads --target-opencl-thread_warp_size TARGET_OPENCL_THREAD_WARP_SIZE target opencl thread_warp_size --target-opencl-from_device TARGET_OPENCL_FROM_DEVICE target opencl from_device --target-opencl-libs TARGET_OPENCL_LIBS target opencl libs options --target-opencl-model TARGET_OPENCL_MODEL target opencl model string --target-opencl-system-lib TARGET_OPENCL_SYSTEM_LIB target opencl system-lib --target-opencl-tag TARGET_OPENCL_TAG target opencl tag string --target-opencl-device TARGET_OPENCL_DEVICE target opencl device string --target-opencl-keys TARGET_OPENCL_KEYS target opencl keys options target metal: --target-metal-max_num_threads TARGET_METAL_MAX_NUM_THREADS target metal max_num_threads --target-metal-thread_warp_size TARGET_METAL_THREAD_WARP_SIZE target metal thread_warp_size --target-metal-from_device TARGET_METAL_FROM_DEVICE target metal from_device --target-metal-libs TARGET_METAL_LIBS target metal libs options --target-metal-keys TARGET_METAL_KEYS target metal keys options --target-metal-model TARGET_METAL_MODEL target metal model string --target-metal-system-lib TARGET_METAL_SYSTEM_LIB target metal system-lib --target-metal-tag TARGET_METAL_TAG target metal tag string --target-metal-device TARGET_METAL_DEVICE target metal device string 
--target-metal-max_function_args TARGET_METAL_MAX_FUNCTION_ARGS target metal max_function_args target webgpu: --target-webgpu-max_num_threads TARGET_WEBGPU_MAX_NUM_THREADS target webgpu max_num_threads --target-webgpu-from_device TARGET_WEBGPU_FROM_DEVICE target webgpu from_device --target-webgpu-libs TARGET_WEBGPU_LIBS target webgpu libs options --target-webgpu-model TARGET_WEBGPU_MODEL target webgpu model string --target-webgpu-system-lib TARGET_WEBGPU_SYSTEM_LIB target webgpu system-lib --target-webgpu-tag TARGET_WEBGPU_TAG target webgpu tag string --target-webgpu-device TARGET_WEBGPU_DEVICE target webgpu device string --target-webgpu-keys TARGET_WEBGPU_KEYS target webgpu keys options target rocm: --target-rocm-max_num_threads TARGET_ROCM_MAX_NUM_THREADS target rocm max_num_threads --target-rocm-thread_warp_size TARGET_ROCM_THREAD_WARP_SIZE target rocm thread_warp_size --target-rocm-from_device TARGET_ROCM_FROM_DEVICE target rocm from_device --target-rocm-libs TARGET_ROCM_LIBS target rocm libs options --target-rocm-mattr TARGET_ROCM_MATTR target rocm mattr options --target-rocm-max_shared_memory_per_block TARGET_ROCM_MAX_SHARED_MEMORY_PER_BLOCK target rocm max_shared_memory_per_block --target-rocm-model TARGET_ROCM_MODEL target rocm model string --target-rocm-system-lib TARGET_ROCM_SYSTEM_LIB target rocm system-lib --target-rocm-mtriple TARGET_ROCM_MTRIPLE target rocm mtriple string --target-rocm-tag TARGET_ROCM_TAG target rocm tag string --target-rocm-device TARGET_ROCM_DEVICE target rocm device string --target-rocm-mcpu TARGET_ROCM_MCPU target rocm mcpu string --target-rocm-max_threads_per_block TARGET_ROCM_MAX_THREADS_PER_BLOCK target rocm max_threads_per_block --target-rocm-keys TARGET_ROCM_KEYS target rocm keys options target vulkan: --target-vulkan-max_num_threads TARGET_VULKAN_MAX_NUM_THREADS target vulkan max_num_threads --target-vulkan-thread_warp_size TARGET_VULKAN_THREAD_WARP_SIZE target vulkan thread_warp_size --target-vulkan-from_device TARGET_VULKAN_FROM_DEVICE target vulkan from_device --target-vulkan-max_per_stage_descriptor_storage_buffer TARGET_VULKAN_MAX_PER_STAGE_DESCRIPTOR_STORAGE_BUFFER target vulkan max_per_stage_descriptor_storage_buffer --target-vulkan-driver_version TARGET_VULKAN_DRIVER_VERSION target vulkan driver_version --target-vulkan-supports_16bit_buffer TARGET_VULKAN_SUPPORTS_16BIT_BUFFER target vulkan supports_16bit_buffer --target-vulkan-max_block_size_z TARGET_VULKAN_MAX_BLOCK_SIZE_Z target vulkan max_block_size_z --target-vulkan-libs TARGET_VULKAN_LIBS target vulkan libs options --target-vulkan-supports_dedicated_allocation TARGET_VULKAN_SUPPORTS_DEDICATED_ALLOCATION target vulkan supports_dedicated_allocation --target-vulkan-supported_subgroup_operations TARGET_VULKAN_SUPPORTED_SUBGROUP_OPERATIONS target vulkan supported_subgroup_operations --target-vulkan-mattr TARGET_VULKAN_MATTR target vulkan mattr options --target-vulkan-max_storage_buffer_range TARGET_VULKAN_MAX_STORAGE_BUFFER_RANGE target vulkan max_storage_buffer_range --target-vulkan-max_push_constants_size TARGET_VULKAN_MAX_PUSH_CONSTANTS_SIZE target vulkan max_push_constants_size --target-vulkan-supports_push_descriptor TARGET_VULKAN_SUPPORTS_PUSH_DESCRIPTOR target vulkan supports_push_descriptor --target-vulkan-supports_int64 TARGET_VULKAN_SUPPORTS_INT64 target vulkan supports_int64 --target-vulkan-supports_float32 TARGET_VULKAN_SUPPORTS_FLOAT32 target vulkan supports_float32 --target-vulkan-model TARGET_VULKAN_MODEL target vulkan model string --target-vulkan-max_block_size_x 
TARGET_VULKAN_MAX_BLOCK_SIZE_X target vulkan max_block_size_x --target-vulkan-system-lib TARGET_VULKAN_SYSTEM_LIB target vulkan system-lib --target-vulkan-max_block_size_y TARGET_VULKAN_MAX_BLOCK_SIZE_Y target vulkan max_block_size_y --target-vulkan-tag TARGET_VULKAN_TAG target vulkan tag string --target-vulkan-supports_int8 TARGET_VULKAN_SUPPORTS_INT8 target vulkan supports_int8 --target-vulkan-max_spirv_version TARGET_VULKAN_MAX_SPIRV_VERSION target vulkan max_spirv_version --target-vulkan-vulkan_api_version TARGET_VULKAN_VULKAN_API_VERSION target vulkan vulkan_api_version --target-vulkan-supports_8bit_buffer TARGET_VULKAN_SUPPORTS_8BIT_BUFFER target vulkan supports_8bit_buffer --target-vulkan-device_type TARGET_VULKAN_DEVICE_TYPE target vulkan device_type string --target-vulkan-supports_int32 TARGET_VULKAN_SUPPORTS_INT32 target vulkan supports_int32 --target-vulkan-device TARGET_VULKAN_DEVICE target vulkan device string --target-vulkan-max_threads_per_block TARGET_VULKAN_MAX_THREADS_PER_BLOCK target vulkan max_threads_per_block --target-vulkan-max_uniform_buffer_range TARGET_VULKAN_MAX_UNIFORM_BUFFER_RANGE target vulkan max_uniform_buffer_range --target-vulkan-driver_name TARGET_VULKAN_DRIVER_NAME target vulkan driver_name string --target-vulkan-supports_integer_dot_product TARGET_VULKAN_SUPPORTS_INTEGER_DOT_PRODUCT target vulkan supports_integer_dot_product --target-vulkan-supports_storage_buffer_storage_class TARGET_VULKAN_SUPPORTS_STORAGE_BUFFER_STORAGE_CLASS target vulkan supports_storage_buffer_storage_class --target-vulkan-supports_float16 TARGET_VULKAN_SUPPORTS_FLOAT16 target vulkan supports_float16 --target-vulkan-device_name TARGET_VULKAN_DEVICE_NAME target vulkan device_name string --target-vulkan-supports_float64 TARGET_VULKAN_SUPPORTS_FLOAT64 target vulkan supports_float64 --target-vulkan-keys TARGET_VULKAN_KEYS target vulkan keys options --target-vulkan-max_shared_memory_per_block TARGET_VULKAN_MAX_SHARED_MEMORY_PER_BLOCK target vulkan max_shared_memory_per_block --target-vulkan-supports_int16 TARGET_VULKAN_SUPPORTS_INT16 target vulkan supports_int16 target cuda: --target-cuda-max_num_threads TARGET_CUDA_MAX_NUM_THREADS target cuda max_num_threads --target-cuda-thread_warp_size TARGET_CUDA_THREAD_WARP_SIZE target cuda thread_warp_size --target-cuda-from_device TARGET_CUDA_FROM_DEVICE target cuda from_device --target-cuda-arch TARGET_CUDA_ARCH target cuda arch string --target-cuda-libs TARGET_CUDA_LIBS target cuda libs options --target-cuda-max_shared_memory_per_block TARGET_CUDA_MAX_SHARED_MEMORY_PER_BLOCK target cuda max_shared_memory_per_block --target-cuda-model TARGET_CUDA_MODEL target cuda model string --target-cuda-system-lib TARGET_CUDA_SYSTEM_LIB target cuda system-lib --target-cuda-tag TARGET_CUDA_TAG target cuda tag string --target-cuda-device TARGET_CUDA_DEVICE target cuda device string --target-cuda-mcpu TARGET_CUDA_MCPU target cuda mcpu string --target-cuda-max_threads_per_block TARGET_CUDA_MAX_THREADS_PER_BLOCK target cuda max_threads_per_block --target-cuda-registers_per_block TARGET_CUDA_REGISTERS_PER_BLOCK target cuda registers_per_block --target-cuda-keys TARGET_CUDA_KEYS target cuda keys options target sdaccel: --target-sdaccel-from_device TARGET_SDACCEL_FROM_DEVICE target sdaccel from_device --target-sdaccel-libs TARGET_SDACCEL_LIBS target sdaccel libs options --target-sdaccel-model TARGET_SDACCEL_MODEL target sdaccel model string --target-sdaccel-system-lib TARGET_SDACCEL_SYSTEM_LIB target sdaccel system-lib --target-sdaccel-tag TARGET_SDACCEL_TAG 
target sdaccel tag string --target-sdaccel-device TARGET_SDACCEL_DEVICE target sdaccel device string --target-sdaccel-keys TARGET_SDACCEL_KEYS target sdaccel keys options target composite: --target-composite-from_device TARGET_COMPOSITE_FROM_DEVICE target composite from_device --target-composite-libs TARGET_COMPOSITE_LIBS target composite libs options --target-composite-devices TARGET_COMPOSITE_DEVICES target composite devices options --target-composite-model TARGET_COMPOSITE_MODEL target composite model string --target-composite-tag TARGET_COMPOSITE_TAG target composite tag string --target-composite-device TARGET_COMPOSITE_DEVICE target composite device string --target-composite-keys TARGET_COMPOSITE_KEYS target composite keys options target stackvm: --target-stackvm-from_device TARGET_STACKVM_FROM_DEVICE target stackvm from_device --target-stackvm-libs TARGET_STACKVM_LIBS target stackvm libs options --target-stackvm-model TARGET_STACKVM_MODEL target stackvm model string --target-stackvm-system-lib TARGET_STACKVM_SYSTEM_LIB target stackvm system-lib --target-stackvm-tag TARGET_STACKVM_TAG target stackvm tag string --target-stackvm-device TARGET_STACKVM_DEVICE target stackvm device string --target-stackvm-keys TARGET_STACKVM_KEYS target stackvm keys options target aocl_sw_emu: --target-aocl_sw_emu-from_device TARGET_AOCL_SW_EMU_FROM_DEVICE target aocl_sw_emu from_device --target-aocl_sw_emu-libs TARGET_AOCL_SW_EMU_LIBS target aocl_sw_emu libs options --target-aocl_sw_emu-model TARGET_AOCL_SW_EMU_MODEL target aocl_sw_emu model string --target-aocl_sw_emu-system-lib TARGET_AOCL_SW_EMU_SYSTEM_LIB target aocl_sw_emu system-lib --target-aocl_sw_emu-tag TARGET_AOCL_SW_EMU_TAG target aocl_sw_emu tag string --target-aocl_sw_emu-device TARGET_AOCL_SW_EMU_DEVICE target aocl_sw_emu device string --target-aocl_sw_emu-keys TARGET_AOCL_SW_EMU_KEYS target aocl_sw_emu keys options target c: --target-c-unpacked-api TARGET_C_UNPACKED_API target c unpacked-api --target-c-from_device TARGET_C_FROM_DEVICE target c from_device --target-c-libs TARGET_C_LIBS target c libs options --target-c-constants-byte-alignment TARGET_C_CONSTANTS_BYTE_ALIGNMENT target c constants-byte-alignment --target-c-executor TARGET_C_EXECUTOR target c executor string --target-c-link-params TARGET_C_LINK_PARAMS target c link-params --target-c-model TARGET_C_MODEL target c model string --target-c-workspace-byte-alignment TARGET_C_WORKSPACE_BYTE_ALIGNMENT target c workspace-byte-alignment --target-c-system-lib TARGET_C_SYSTEM_LIB target c system-lib --target-c-tag TARGET_C_TAG target c tag string --target-c-interface-api TARGET_C_INTERFACE_API target c interface-api string --target-c-mcpu TARGET_C_MCPU target c mcpu string --target-c-device TARGET_C_DEVICE target c device string --target-c-runtime TARGET_C_RUNTIME target c runtime string --target-c-keys TARGET_C_KEYS target c keys options --target-c-march TARGET_C_MARCH target c march string target hexagon: --target-hexagon-from_device TARGET_HEXAGON_FROM_DEVICE target hexagon from_device --target-hexagon-libs TARGET_HEXAGON_LIBS target hexagon libs options --target-hexagon-mattr TARGET_HEXAGON_MATTR target hexagon mattr options --target-hexagon-model TARGET_HEXAGON_MODEL target hexagon model string --target-hexagon-llvm-options TARGET_HEXAGON_LLVM_OPTIONS target hexagon llvm-options options --target-hexagon-mtriple TARGET_HEXAGON_MTRIPLE target hexagon mtriple string --target-hexagon-system-lib TARGET_HEXAGON_SYSTEM_LIB target hexagon system-lib --target-hexagon-mcpu TARGET_HEXAGON_MCPU 
target hexagon mcpu string --target-hexagon-device TARGET_HEXAGON_DEVICE target hexagon device string --target-hexagon-tag TARGET_HEXAGON_TAG target hexagon tag string --target-hexagon-link-params TARGET_HEXAGON_LINK_PARAMS target hexagon link-params --target-hexagon-keys TARGET_HEXAGON_KEYS target hexagon keys options executor graph: --executor-graph-link-params EXECUTOR_GRAPH_LINK_PARAMS Executor graph link-params executor aot: --executor-aot-workspace-byte-alignment EXECUTOR_AOT_WORKSPACE_BYTE_ALIGNMENT Executor aot workspace-byte-alignment --executor-aot-unpacked-api EXECUTOR_AOT_UNPACKED_API Executor aot unpacked-api --executor-aot-interface-api EXECUTOR_AOT_INTERFACE_API Executor aot interface-api string --executor-aot-link-params EXECUTOR_AOT_LINK_PARAMS Executor aot link-params runtime cpp: --runtime-cpp-system-lib RUNTIME_CPP_SYSTEM_LIB Runtime cpp system-lib runtime crt: --runtime-crt-system-lib RUNTIME_CRT_SYSTEM_LIB Runtime crt system-lib
###Markdown Now that the tuning data for the model has been collected, the model can be recompiled with the optimized operators to speed up computation.
###Code
!python -m tvm.driver.tvmc compile \
    --target "llvm" \
    --tuning-records resnet50-v2-7-autotuner_records.json \
    --output resnet50-v2-7-tvm_autotuned.tar \
    ../../_models/resnet50-v2-7.onnx
###Output
_____no_output_____
###Markdown Verify that the optimized model runs and produces the same results:
###Code
!python -m tvm.driver.tvmc run \
    --inputs imagenet_cat.npz \
    --output predictions.npz \
    resnet50-v2-7-tvm_autotuned.tar

!python postprocess.py
###Output
class='n02123045 tabby, tabby cat' with probability=0.621104
class='n02123159 tiger cat' with probability=0.356378
class='n02124075 Egyptian cat' with probability=0.019712
class='n02129604 tiger, Panthera tigris' with probability=0.001215
class='n04040759 radiator' with probability=0.000262
###Markdown Comparing the tuned and untuned models. TVMC provides tools for basic performance benchmarking between models. You can specify the number of repetitions, and TVMC reports the model's run time independently of runtime startup. This gives a rough sense of how much tuning improved the model's performance. For example, on the Intel i7 system used for testing, the tuned model runs $47\%$ faster than the untuned model.
###Code
!python -m tvm.driver.tvmc run \
    --inputs imagenet_cat.npz \
    --output predictions.npz \
    --print-time \
    --repeat 100 \
    resnet50-v2-7-tvm_autotuned.tar

!python -m tvm.driver.tvmc run \
    --inputs imagenet_cat.npz \
    --output predictions.npz \
    --print-time \
    --repeat 100 \
    resnet50-v2-7-tvm.tar
###Output
Execution time summary:
 mean (ms)   median (ms)    max (ms)     min (ms)     std (ms)
  51.8327       52.5906      67.5374      42.9440       4.4040
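###Markdown The two timing summaries above can be reduced to a single speedup figure. The cell below is a small illustrative helper that is not part of the original tutorial; the two latencies passed to it are placeholders standing in for the mean values printed by the two `tvmc run --print-time` commands, not real measurements.
###Code
def speedup_percent(untuned_ms, tuned_ms):
    # how much faster the tuned module is, as a percentage of the untuned latency
    return 100.0 * (untuned_ms - tuned_ms) / untuned_ms

# hypothetical example: copy the two mean latencies from the summaries above
print(f"tuned model is ~{speedup_percent(untuned_ms=97.0, tuned_ms=51.8):.0f}% faster")
###Output
_____no_output_____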
notebooks/DSI_Research_Sentiment_Analysis_DKK_Semarang_(TF_IDF_&_GaussianNB).ipynb
###Markdown Import libraries ###Code import pandas as pd import matplotlib.pyplot as plt import string, re, requests, csv from google.colab import drive from wordcloud import WordCloud from gensim.corpora import WikiCorpus from nltk import word_tokenize, sent_tokenize import nltk from nltk.corpus import stopwords nltk.download('stopwords') nltk.download('punkt') ###Output [nltk_data] Downloading package stopwords to /root/nltk_data... [nltk_data] Unzipping corpora/stopwords.zip. [nltk_data] Downloading package punkt to /root/nltk_data... [nltk_data] Unzipping tokenizers/punkt.zip. ###Markdown Load dataset ###Code drive.mount('/content/drive') train = pd.read_csv('/content/drive/MyDrive/Data/train.csv') test = pd.read_csv('/content/drive/MyDrive/Data/test.csv') train.head() ###Output _____no_output_____ ###Markdown EDA Wordcloud ###Code # positive comments before preprocessing data_pos = train[train['label'] == 'positive'] all_text = ' '.join(word for word in data_pos['text']) wordcloud = WordCloud(colormap='Greens', width=1000, height=1000, mode='RGBA', background_color='white').generate(all_text) plt.figure(figsize=(20,10)) plt.imshow(wordcloud, interpolation='bilinear') plt.axis("off") plt.margins(x=0, y=0) plt.show() # negative comments before preprocessing data_neg = train[train['label'] == 'negative'] all_text = ' '.join(word for word in data_neg['text']) wordcloud = WordCloud(colormap='Reds', width=1000, height=1000, mode='RGBA', background_color='white').generate(all_text) plt.figure(figsize=(20,10)) plt.imshow(wordcloud, interpolation='bilinear') plt.axis("off") plt.margins(x=0, y=0) plt.show() # neutral comments before preprocessing data_neut = train[train['label'] == 'neutral'] all_text = ' '.join(word for word in data_neut['text']) wordcloud = WordCloud(colormap='Blues', width=1000, height=1000, mode='RGBA', background_color='white').generate(all_text) plt.figure(figsize=(20,10)) plt.imshow(wordcloud, interpolation='bilinear') plt.axis("off") plt.margins(x=0, y=0) plt.show() # value counts train['label'].value_counts() ###Output _____no_output_____ ###Markdown Preprocess ###Code train_text = train['text'] test_text = test['text'] # CLEANSING def cleansing(data): # lowercasing data = data.lower() # remove punctuation punct = string.punctuation translator = str.maketrans(punct, ' '*len(punct)) data = data.translate(translator) # remove ASCII dan unicode # data = data.encode('ascii', 'ignore').decode('utf-8') # data = re.sub(r'[^\x00-\x7f]',r'', data) # remove newline data = data.replace('\n', ' ') # remove digit pattern = r'[0-9]' data = re.sub(pattern, '', data) # remove extra space data = ' '.join(data.split()) return data import sys # REMOVE EMOJI # def remove_emoji(data): # emoji_pattern = re.compile("[" # u"\U0001F600-\U0001F64F" # emoticons # u"\U0001F300-\U0001F5FF" # symbols & pictographs # u"\U0001F680-\U0001F6FF" # transport & map symbols # u"\U0001F1E0-\U0001F1FF" # flags (iOS) # u"\U00002702-\U000027B0" # u"\U000024C2-\U0001F251" # "]+", flags=re.UNICODE) # return emoji_pattern.sub(r' ', data) # CONVERT EMOJIS import emoji import functools import operator import re df_emoji = pd.read_csv('/content/drive/MyDrive/Data/emoji_to_text.csv') UNICODE_EMO = {row['emoji']:row['makna'] for idx,row in df_emoji.iterrows()} def convert_emojis(text): # split emojis em_split_emoji = emoji.get_emoji_regexp().split(text) em_split_whitespace = [substr.split() for substr in em_split_emoji] em_split = functools.reduce(operator.concat, em_split_whitespace) text = ' '.join(em_split) # 
convert emojis for emot in UNICODE_EMO: text = re.sub(r'('+emot+')', "_".join(UNICODE_EMO[emot].replace(",","").replace(":","").split()), text) return text.lower() # CONSTRUCT KAMUS ALAY text_path1 = 'https://raw.githubusercontent.com/ramaprakoso/analisis-sentimen/master/kamus/kbba.txt' text_path2 = 'https://raw.githubusercontent.com/nasalsabila/kamus-alay/master/colloquial-indonesian-lexicon.csv' kamus_alay1 = pd.read_csv(text_path1, delimiter="\t", header=None, names=['slang', 'formal']) kamus_alay2 = pd.read_csv(text_path2) kamus_alay = pd.concat([kamus_alay1, kamus_alay2[['slang', 'formal']]]).reset_index(drop=True) dict_alay = dict() for index, row in kamus_alay.iterrows(): dict_alay[row['slang']] = row['formal'] # NORMALIZE COLLOQUIAL/ALAY def normalize_text(data): word_tokens = word_tokenize(data) result = [dict_alay.get(w,w) for w in word_tokens] return ' '.join(result) # CONSTRUCT STOPWORDS rama_stopword = "https://raw.githubusercontent.com/ramaprakoso/analisis-sentimen/master/kamus/stopword.txt" yutomo_stopword = "https://raw.githubusercontent.com/yasirutomo/python-sentianalysis-id/master/data/feature_list/stopwordsID.txt" fpmipa_stopword = "https://raw.githubusercontent.com/onlyphantom/elangdev/master/elang/word2vec/utils/stopwords-list/fpmipa-stopwords.txt" sastrawi_stopword = "https://raw.githubusercontent.com/onlyphantom/elangdev/master/elang/word2vec/utils/stopwords-list/sastrawi-stopwords.txt" aliakbar_stopword = "https://raw.githubusercontent.com/onlyphantom/elangdev/master/elang/word2vec/utils/stopwords-list/aliakbars-bilp.txt" pebahasa_stopword = "https://raw.githubusercontent.com/onlyphantom/elangdev/master/elang/word2vec/utils/stopwords-list/pebbie-pebahasa.txt" elang_stopword = "https://raw.githubusercontent.com/onlyphantom/elangdev/master/elang/word2vec/utils/stopwords-id.txt" nltk_stopword = stopwords.words('indonesian') path_stopwords = [rama_stopword, yutomo_stopword, fpmipa_stopword, sastrawi_stopword, aliakbar_stopword, pebahasa_stopword, elang_stopword] # CUSTOM STOPWORDS other = ''' admin mimin min minkes kalo nya username ''' # gabungkan stopwords stopwords_l = nltk_stopword for path in path_stopwords: response = requests.get(path) stopwords_l += response.text.split('\n') st_words = set(stopwords_l) other_stopword = set(other.split()) stop_words = st_words | other_stopword # REMOVE STOPWORDS def remove_stopword(text, stop_words=stop_words): word_tokens = word_tokenize(text) filtered_sentence = [w for w in word_tokens if not w in stop_words] return ' '.join(filtered_sentence) # full pipeline preprocess def preprocess(data): data = cleansing(data) # data = remove_emoji(data) data = convert_emojis(data) data = normalize_text(data) data = remove_stopword(data) return data # rename username to @username pattern = "(?:@)([A-Za-z0-9_](?:(?:[A-Za-z0-9_]|(?:\.(?!\.))){0,28}(?:[A-Za-z0-9_]))?)" train_text = train_text.apply(lambda x: re.sub(pattern, "@username", x)) test_text = test_text.apply(lambda x: re.sub(pattern, "@username", x)) # preprocess train_text = train_text.apply(lambda x: preprocess(x)) test_text = test_text.apply(lambda x: preprocess(x)) ###Output _____no_output_____ ###Markdown Feature extraction (TF-IDF) ###Code from sklearn.feature_extraction.text import TfidfVectorizer vectorizer = TfidfVectorizer(min_df=2, max_df=0.95, max_features = 5000, ngram_range = (1, 3), sublinear_tf = True ) train_features = vectorizer.fit_transform(train_text) test_features = vectorizer.transform(test_text) train_features.toarray().shape ###Output _____no_output_____ 
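###Markdown Before fitting a classifier it can help to sanity-check what the vectorizer actually learned. The cell below is a small optional sketch that is not part of the original notebook; it assumes scikit-learn >= 1.0 for `get_feature_names_out` (on older versions use `vectorizer.get_feature_names()` instead).
###Code
import numpy as np

terms = np.array(vectorizer.get_feature_names_out())
order = np.argsort(vectorizer.idf_)  # low idf = common term, high idf = rare term
print('vocabulary size  :', len(terms))
print('most common terms:', terms[order[:10]])
print('rarest terms kept:', terms[order[-10:]])
###Output
_____no_output_____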
###Markdown Naive Bayes ###Code # mapping label mapper = {'neutral':0, 'positive':1, 'negative':2} train_y = train['label'].map(mapper) test_y = test['label'].map(mapper) train_y from sklearn.naive_bayes import GaussianNB from sklearn.model_selection import cross_val_score from sklearn.metrics import accuracy_score clf = GaussianNB() clf.fit( train_features.toarray(),train_y) #cross val score cross_val_score(clf, train_features.toarray(), train_y, cv=5) #predict y_pred=clf.predict(test_features.toarray()) #accuracy accuracy_score(test_y,y_pred) from joblib import dump, load dump(clf, 'tfidf gnb.joblib') ###Output _____no_output_____ ###Markdown GridSearch CV ###Code from sklearn.model_selection import GridSearchCV import numpy as np parameters = { 'var_smoothing': np.logspace(0,-9, num=100)} clf = GaussianNB() clf_grid = GridSearchCV(clf, parameters) clf_grid.fit(train_features.toarray(),train_y) clf_grid.best_params_ # cross-val score cross_val_score(clf_grid, train_features.toarray(), train_y, cv=5) # accuracy test y_pred = clf_grid.predict(test_features.toarray()) accuracy_score(test_y,y_pred) ###Output _____no_output_____
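###Markdown Accuracy alone can hide how the model behaves on each sentiment class, especially when the classes are imbalanced. A short follow-up sketch (not in the original notebook), assuming the fitted `clf_grid`, `test_features`, `test_y` and `mapper` from the cells above:
###Code
from sklearn.metrics import classification_report, confusion_matrix

y_pred = clf_grid.predict(test_features.toarray())
label_names = sorted(mapper, key=mapper.get)  # ['neutral', 'positive', 'negative']
print(confusion_matrix(test_y, y_pred))
print(classification_report(test_y, y_pred, target_names=label_names))
###Output
_____no_output_____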
develop_language/python/doc/science_compute/ipynb_files/func_params.ipynb
###Markdown 1. Default parameters

1.1 Default parameters can simplify function calls. There are a few things to keep in mind when setting them:
first, required parameters come first and default parameters after them, otherwise the Python interpreter reports an error (think about why a default parameter cannot be placed before a required one);
second, how to choose which parameters get defaults. When a function has several parameters, put the ones that change often at the front and the ones that rarely change at the back; the rarely changing ones can become default parameters.
What is the benefit of default parameters? The biggest one is that they lower the difficulty of calling the function.

1.2 Default parameters are very useful, but used carelessly they become a trap. The biggest pitfall is demonstrated below.
First define a function that takes a list, appends an 'END' and returns it:
```python
def add_end(L=[]):
    L.append('END')
    return L
```
When you call it normally, the result looks fine:
```
>>> add_end([1, 2, 3])
[1, 2, 3, 'END']
>>> add_end(['x', 'y', 'z'])
['x', 'y', 'z', 'END']
```
When you call it relying on the default parameter, the first result is also correct:
```
>>> add_end()
['END']
```
But when you call add_end() again, the result is wrong:
```
>>> add_end()
['END', 'END']
>>> add_end()
['END', 'END', 'END']
```
Many beginners are puzzled: the default parameter is [], yet the function seems to "remember" the list to which 'END' was appended last time.
The explanation is as follows: when the function is defined, the value of the default parameter L is already computed, namely []. Because the default parameter L is itself a variable pointing to the object [], every call that changes the content of L also changes what the default parameter holds on the next call; it is no longer the [] from the function definition.
**Note:** when defining default parameters, remember one thing: a default parameter must point to an immutable object!
To fix the example above, we can use the immutable object None:
```python
def add_end(L=None):
    if L is None:
        L = []
    L.append('END')
    return L
```
Now, no matter how many times it is called, there is no problem:
```
>>> add_end()
['END']
>>> add_end()
['END']
```
Why design immutable objects such as str and None in the first place? Because once an immutable object is created, its internal data cannot be modified, which removes the errors caused by modifying data. In addition, since the object never changes, reading it from multiple tasks at the same time requires no locking. When writing programs, if something can be designed as an immutable object, prefer to make it immutable.
###Code
def power(x, n=2):
    s = 1
    while n > 0:
        n = n - 1
        s = s * x
    return s

print power(3)
print power(3, 3)
###Output
9
27
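###Markdown The pitfall above exists because the default list is created once, when the function is defined, and is then shared by every call that relies on the default. The small cell below (added for illustration, not part of the original text) makes this visible by printing the object's id:
###Code
def add_end(L=[]):
    print(id(L))   # the same id is printed every time the default is used
    L.append('END')
    return L

print(add_end())   # ['END']
print(add_end())   # ['END', 'END'] -- the same list object was reused
###Output
_____no_output_____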
jupyter-notebooks/dsdh/L5-deep-learning-diabetes-lr-mlp.ipynb
###Markdown The `diabetes.tab.txt` file is a tab-delimitted text file with the original unnormalized units (age in years, blood pressure in mmHg, gender as 1 for female, 2 for male.The `diabetes.rwrite1.txt` url will load a dataset with standardized values for each feature, but more informative column names. In both data sets, here is what the names mean.0. age: in years 1. sex: 1=male, 2=female 2. bmi: body mass index >35=obese, >30=overweight, <18.5=underweight 3. bp/map: mean arterial pressure (blood pressure, systolic+diastolic divided by 2) 4. s1/tc: TC level is a measure of a B12 transportation molecule that is not bound to B12 yet. I high ratio of holotranscobalamin (holo TC or TCH) to transcobalamin (TC) indicates healthy availability of B12 for absorption. TCH above 50 pmol/liter is considered good. 5. s2/ldl: Low density lipid (good cholesterol) 6. s3/hdl: High density Lipid cholesterol (bad cholesterol) 7. s4/tch: holo TC level? (B12 bound tot he transport molecule, **t**rans**c**obalamin, to create **h**olo**t**rans**c**obalamin), 50pmol/L=goodB12 8. s5/ltg: 9. s6/glu: glucose level 10. y: a quantitative measure of disease progression one year after baseline ###Code column_names = list(pd.read_csv('https://www4.stat.ncsu.edu/~boos/var.select/diabetes.rwrite1.txt', sep=' ').columns) df.columns = column_names df.head().round(1) column_names[3] = 'bp' column_names[-1] = 'severity' df.columns = column_names df.head() display(df.round(1)) target_names = ['severity'] feature_names = [name for name in df.columns if name not in target_names] print(f' target_names: {target_names}') print(f'feature_names: {feature_names}') fig = sns.pairplot(df, x_vars=feature_names[:4], y_vars=target_names) fig = sns.pairplot(df, x_vars=feature_names[4:7], y_vars=target_names) fig = sns.pairplot(df, x_vars=feature_names[7:], y_vars=target_names) ###Output _____no_output_____ ###Markdown Create a training and testset. The training set is like the question and answer pairs you get to see during a school lesson. The test set is like the exam question and answer pairs that the teacher grades you on at the end of the course. Use the training set to show your machine learning model the relationship between your features (age, gender, bmi, blood tests etc) and your target variables (diabetes severity). You will use the training set to traiin or fit the model. You'll use the test set to see how well you model will work (it's accuracy, standard error, precision, recall, etc) in the real world. You'll make predictions for the test set "questions" (`X_test`) and see how closely they match the test set answers (`y_test`). 
###Code from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(df[feature_names], df[target_names], test_size=hyperparams['test_size']) X_train = pd.DataFrame(X_train, columns=feature_names) X_test = pd.DataFrame(X_test, columns=feature_names) y_train = pd.DataFrame(y_train, columns=target_names) y_test = pd.DataFrame(y_test, columns=target_names) print(f'X_train.shape: {X_train.shape}; y_train.shape: {y_train.shape}') print(f' X_test.shape: {X_test.shape}; y_test.shape: {y_test.shape}') display(X_train.describe(include='all')) display(y_train.describe(include='all')) from sklearn.neural_network import MLPRegressor regr = MLPRegressor(hidden_layer_sizes=(5, 2), max_iter=200000, random_state=0) # scikitlearn Multi-layer perceptrons expect a 1-D array (pd.DataFrame column or pd.Series) for y: regr.fit(X=X_train, y=y_train['severity']) for layer in range(len(regr.coefs_)): print(f'LAYER: {layer}' + (' (INPUT LAYER)' if layer == 0 else '')) W = regr.coefs_[layer] if layer > 0: w0 = regr.intercepts_[layer - 1] else: w0 = np.zeros(W.shape[0]) df_params = pd.concat([ pd.DataFrame(W, columns=[f'neuron_{layer}_{i}' for i in range(W.shape[1])]), pd.DataFrame(w0, columns=['intercept'])], axis=1) if layer is 0: df_params.index = feature_names else: df_params.index = range(1, len(df_params) + 1) display(df_params) print(f'LAYER: {layer+1} (OUTPUT LAYER)') display(regr.intercepts_[-1]) y_train.values # fig = sns.scatterplot(x=X_train[features], y=y_train[target_names[0]]) y_train_pred = regr.predict(X_train) df_y_train = pd.DataFrame(y_train_pred, columns=['pred_severity']) df_y_train['true_severity'] = y_train.values.flatten() df_y_train['residual'] = y_train_pred - y_train.values.flatten() # fig = plt.plot(X_train['bmi'].values.flatten(), , color='r', linewidth=2) # plt.xlabel('BMI') # plt.ylabel('Diabetes Severity') # plt.title(f'severity = {regr.coef_.round(2)[0]} * bmi + {regr.intercept_.round(2)}') # print(f'lr_bmi.intercept_: {lr_bmi.intercept_.round(2)}') # print(f' lr_bmi.coef_: {lr_bmi.coef_.round(2)}') df_y_train df_y_train.plot(kind='scatter', x='true_severity', y='residual') from sklearn.metrics import mean_squared_error y_test_pred = regr.predict(X_test[feature_names]) # print(f'y_test_pred.shape: {y_test_pred.shape}') mae_test_bmi, rmse_test_bmi = mae_rmse(y_test, y_test_pred) results[-1]['test_score'] = lr_bmi.score(X_test[features], y_test) results[-1]['test_rmse'] = rmse_test_bmi results[-1]['test_mae'] = mae_test_bmi display(pd.DataFrame(results).round(2)) features rmse_overfit_ratio = round((rmse_test_bmi - rmse_train_bmi) / rmse_test_bmi, 3) rmse_overfit_ratio score_overfit_ratio = round((results[-1]['train_score'] - results[-1]['test_score']) / results[-1]['test_score'], 3) score_overfit_ratio lr_multi = LinearRegression() features = feature_names lr_multi = lr_multi.fit(X_train, y_train) print(f'lr_multi.intercept_: {lr_multi.intercept_.round(2)}') print('lr_multi_coef:') lr_multi_coef = pd.Series(lr_multi.coef_[0], index=features) print(lr_multi_coef.round(2)) ###Output _____no_output_____ ###Markdown In the sex column a value of 2 indicates female and 1 indicates male. What does this coefficient list tell you about the affect that sex (gender) has on ones likelihood of developing diabetes? 
###Code fig = sns.scatterplot(x=X_train['bmi'].values, y=y_train.values.flatten()) print(lr_multi.coef_) print(lr_multi.predict(X_train).flatten().shape) fig = sns.scatterplot(X_train['bmi'], lr_multi.predict(X_train).flatten(), color='r', linewidth=2) plt.xlabel('BMI') plt.ylabel('Diabetes Severity') # print(lr_multi_coef['age']) # print(lr_multi_coef['bmi']) plt.title(f'severity = ... + {lr_multi_coef["age"].round(2)}*age + {lr_multi_coef["bmi"].round(2)}*bmi + {lr_multi.intercept_.round(2)}') print(f'lr_multi.intercept_: {lr_multi.intercept_.round(2)}') print(' lr_multi_coef_:') print(lr_multi_coef.round(2)) ###Output _____no_output_____ ###Markdown Let's create a function that we can use to train a model and measure it's accuracy. API: Inputs: model: untrained sklearn model object (this will allow us to use any model we like) X_train, y_train: training set of features (X) and target labels (y), defaults to the entire original train_test_split X_test, y_test: test set to measure accuracy on an unseen dataset Outputs: Dictionary of model hyperparameters (class name, features), trained parameters (coefficients, intercept), and performance (RMSE, MAE, pierson correlation score) ###Code def fit_evaluate_model(model=LinearRegression(), X_train=X_train[features], y_train=y_train, X_test=X_test[features], y_test=y_test, hyperparams=hyperparams): features = list(X_train.columns) hyperparams.update({ 'model': model.__class__.__name__, 'num_features': len(features), 'num_samples': len(X_train) }) model = model.fit(X_train, np.array(y_train).reshape(-1,1)) print(f'model.intercept_: {model.intercept_.round(2)}') print('model_coef:') model_coef = pd.Series(model.coef_.flatten(), index=features) print(model_coef.round(2)) y_train_pred = model.predict(X_train[features]).flatten() e_train = (np.array(y_train_pred).flatten() - np.array(y_train).flatten()) mae_train = np.sum(np.abs(e_train)) / len(e_train) # .mean() rmse_train = np.sqrt((e_train ** 2).mean()) hyperparams.update({'train_score': model.score(X_train[features], y_train), 'train_rmse': rmse_train, 'train_mae': mae_train}) y_test_pred = model.predict(X_test[features]).flatten() e_test = y_test_pred - y_test.values[:,0] mae_test = np.sum(np.abs(e_test)) / len(e_test) # .mean() rmse_test = np.sqrt((e_test ** 2).mean()) hyperparams.update({'test_score': model.score(X_test[features], y_test), 'test_rmse': rmse_test, 'test_mae': mae_test}) results_series = pd.concat([pd.Series(hyperparams), model_coef]) return results_series lr_multi = LinearRegression() results.append(fit_evaluate_model(model=lr_multi)) e_train = lr_multi.predict(X_train).flatten() - np.array(y_train).flatten() fig = sns.scatterplot(x=np.array(y_train).flatten(), y=e_train) ylab = plt.ylabel('Error (pred_severity - true_severity)') xlab = plt.xlabel('True Severity') titl = plt.title('Residuals Plot') lr_en = ElasticNet() results.append(fit_evaluate_model(model=lr_en)) lr_lasso = Lasso() results.append(fit_evaluate_model(model=lr_lasso)) # results[-1]['description'] ='lasso on original features' df_results = pd.DataFrame(results).sort_values(['test_rmse']) df_results.round(2) X_train['ldl_x_hdl'] = X_train['ldl'] * X_train['hdl'] X_test['ldl_x_hdl'] = X_test['ldl'] * X_test['hdl'] lr_cholest = LinearRegression() results.append(fit_evaluate_model(model=lr_cholest, X_train=X_train, X_test=X_test)) df_results = pd.DataFrame(results).sort_values(['test_rmse']) df_results.round(2) X_train['ldl_d_hdl'] = X_train['ldl'] / X_train['hdl'] X_test['ldl_d_hdl'] = X_test['ldl'] / X_test['hdl'] 
lr_cholest = LinearRegression() results.append(fit_evaluate_model(model=lr_cholest, X_train=X_train, X_test=X_test)) df_results = pd.DataFrame(results).sort_values(['test_rmse']) df_results.round(2) from sklearn import preprocessing x_scaler = preprocessing.MinMaxScaler() x_scaler.fit(X_train) X_train_scaled = pd.DataFrame(x_scaler.transform(X_train), columns=X_train.columns, index=X_train.index) X_test_scaled = pd.DataFrame(x_scaler.transform(X_test), columns=X_test.columns, index=X_test.index) X_test_scaled.head() y_scaler = preprocessing.MinMaxScaler() y_scaler.fit(y_train) y_train_scaled = pd.DataFrame(y_scaler.transform(y_train), columns=y_train.columns, index=y_train.index) y_test_scaled = pd.DataFrame(y_scaler.transform(y_test), columns=y_test.columns, index=y_test.index) y_test_scaled.head() lr_scaled = LinearRegression() results.append(fit_evaluate_model(model=lr_scaled, X_train=X_train_scaled, X_test=X_test_scaled, y_train=y_train_scaled, y_test=y_test_scaled)) results[-1]['x_scaler'] = x_scaler.__class__.__name__ results[-1]['y_scaler'] = y_scaler.__class__.__name__ df_results = pd.DataFrame(results).sort_values(['test_score'], ascending=False) df_results.round(2) ###Output _____no_output_____ ###Markdown Check out the residuals plot for the best model we've created so far (model id=1, a LinearRegression with 10 unscaled features). Do you see any pattern in this error between your predictions and the truth? This will help you think of ways to engineer some new features. ###Code def plot_residuals(y, y_pred): e = np.array(y_pred).flatten() - np.array(y).flatten() fig = sns.scatterplot(x=np.array(y).flatten(), y=e) plt.ylabel('Error (pred_severity - true_severity)') plt.xlabel('True Severity') plt.title('Residuals Plot') return fig fig = plot_residuals(y_test, lr_multi.predict(X_test[features])) ###Output _____no_output_____ ###Markdown It seems that a LinearRegression overestimates severity for low-severity patients and underestimates it for patients that develop severe diabetes. You can square or take the exponent of our features to give the LinearRegression model curvature. 
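As an aside (not part of the original notebook), the exponent variant would follow the same pattern as the squared features built in the next cell; a minimal sketch, reusing `fit_evaluate_model`, `results`, and the `X_train`/`X_test` frames defined above: ###Code # Hedged sketch: exponentiated copies of every feature, mirroring the squared-feature cells below
X_train_exp = pd.concat([X_train, np.exp(X_train)], axis=1)
X_train_exp.columns = list(X_train.columns) + [c + '_exp' for c in X_train.columns]
X_test_exp = pd.concat([X_test, np.exp(X_test)], axis=1)
X_test_exp.columns = list(X_test.columns) + [c + '_exp' for c in X_test.columns]

# Fit and log the model exactly as for the other feature sets
lr_exp = LinearRegression()
results.append(fit_evaluate_model(model=lr_exp,
                                  X_train=X_train_exp, X_test=X_test_exp,
                                  y_train=y_train, y_test=y_test)) ###Output _____no_output_____ 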
###Code (X_train ** 2).isnull().sum() X_train2 = pd.concat([X_train, X_train ** 2], axis=1) X_train2.columns = list(X_train.columns) + [c + '2' for c in X_train.columns] X_test2 = pd.concat([X_test, X_test ** 2], axis=1) X_test2.columns = list(X_test.columns) + [c + '2' for c in X_test.columns] lr_sqrd = LinearRegression() results.append(fit_evaluate_model(model=lr_sqrd, X_train=X_train2, X_test=X_test2, y_train=y_train, y_test=y_test)) df_results = pd.DataFrame(results).sort_values(['test_score'], ascending=False) df_results.round(2) lr_sqrd_lasso = Lasso() results.append(fit_evaluate_model(model=lr_sqrd_lasso, X_train=X_train2, X_test=X_test2, y_train=y_train, y_test=y_test)) df_results = pd.DataFrame(results).sort_values(['test_score'], ascending=False) df_results.round(2) lr_sqrd_elast = ElasticNet() results.append(fit_evaluate_model(model=lr_sqrd_elast, X_train=X_train2, X_test=X_test2, y_train=y_train, y_test=y_test)) results[-1]['alpha'] = lr_sqrd_elast.alpha df_results = pd.DataFrame(results).sort_values(['test_score'], ascending=False) df_results.round(2) ### BEGIN SOLUTION lr_sqrd_ridge = Ridge() results.append(fit_evaluate_model(model=lr_sqrd_ridge, X_train=X_train2, X_test=X_test2, y_train=y_train, y_test=y_test)) ### END SOLUTION results[-1]['alpha'] = lr_sqrd_ridge.alpha df_results = pd.DataFrame(results).sort_values(['test_score'], ascending=False) df_results.round(2) ### BEGIN SOLUTION lr_sqrd_ridge = Ridge(alpha=1000) results.append(fit_evaluate_model(model=lr_sqrd_ridge, X_train=X_train2, X_test=X_test2, y_train=y_train, y_test=y_test)) results[-1]['alpha'] = lr_sqrd_ridge.alpha ### END SOLUTION df_results = pd.DataFrame(results).sort_values(['test_score'], ascending=False) df_results.round(2) ### BEGIN SOLUTION lr_sqrd_lasso100 = Lasso(alpha=100) results.append(fit_evaluate_model(model=lr_sqrd_lasso100, X_train=X_train2, X_test=X_test2, y_train=y_train, y_test=y_test)) results[-1]['alpha'] = lr_sqrd_lasso100.alpha ### END SOLUTION df_results = pd.DataFrame(results).sort_values(['test_score'], ascending=False) df_results.round(3) df_results['overfitness'] = (df_results['train_score'] - df_results['test_score']) / df_results['train_score'] df_results.round(3) ###Output _____no_output_____
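###Markdown A possible follow-up (not in the original notebook): rather than hand-picking `alpha=1000` for Ridge or `alpha=100` for Lasso as above, one could let cross-validation choose the penalty on the training set. A minimal sketch, assuming the squared-feature frame `X_train2` and `y_train` from the cells above: ###Code # Hedged sketch: grid-search alpha for Ridge with 5-fold CV on the training data
from sklearn.model_selection import GridSearchCV

param_grid = {'alpha': [0.01, 0.1, 1, 10, 100, 1000]}
alpha_search = GridSearchCV(Ridge(), param_grid, scoring='neg_mean_squared_error', cv=5)
alpha_search.fit(X_train2, np.array(y_train).ravel())
print(alpha_search.best_params_, -alpha_search.best_score_) ###Output _____no_output_____ 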
02 - strings.ipynb
###Markdown Strings ###Code s = 'Hello' type(s) s = 'hjhghghgKJHGIGKJHG987658 -- 765876587658765&^%*&^%*&^%' s s = "This is also a string" s s = 'it\'s a great day today' s s = 'many \tha\nks' s print(s) ###Output many ha ks ###Markdown printing strings \n new line\t tab ###Code a = 3 b = 4 print('a is') print(a) print(' and b is') print(b) print(f'a is {a} and b is {b}') print(f'a is {a} and b is {b}, and the sum of a and b is {a+b}') ###Output a is 3 and b is 4, and the sum of a and b is 7 ###Markdown string indexing ###Code s = 'Hello world' s[0] s[2] s[0:5] s[6:11] s[6:] s[:5] s[:] s[::3] s[::-1] s[-1] s[-2] ###Output _____no_output_____ ###Markdown string methods ###Code s s.upper() s s_upper = s.upper() s_upper s s.lower() split = s.split() split s split = s.split('wo') split ###Output _____no_output_____
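###Markdown A few more commonly used string methods, added here as an illustrative aside (not part of the original notebook): `replace`, `strip`, and `join`. ###Code # replace a substring, trim surrounding whitespace, and join a list of strings
print('hello world'.replace('world', 'there'))
print('   padded   '.strip())
print('-'.join(['a', 'b', 'c'])) ###Output hello there
padded
a-b-c 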
examples/tutorials/16_Conditional_Generative_Adversarial_Networks.ipynb
###Markdown For this example, we will create a data distribution consisting of a set of ellipses in 2D, each with a random position, shape, and orientation. Each class corresponds to a different ellipse. Let's randomly generate the ellipses. ###Code import deepchem as dc import numpy as np import tensorflow as tf n_classes = 4 class_centers = np.random.uniform(-4, 4, (n_classes, 2)) class_transforms = [] for i in range(n_classes): xscale = np.random.uniform(0.5, 2) yscale = np.random.uniform(0.5, 2) angle = np.random.uniform(0, np.pi) m = [[xscale*np.cos(angle), -yscale*np.sin(angle)], [xscale*np.sin(angle), yscale*np.cos(angle)]] class_transforms.append(m) class_transforms = np.array(class_transforms) ###Output _____no_output_____ ###Markdown This function generates random data from the distribution. For each point it chooses a random class, then a random position in that class' ellipse. ###Code def generate_data(n_points): classes = np.random.randint(n_classes, size=n_points) r = np.random.random(n_points) angle = 2*np.pi*np.random.random(n_points) points = (r*np.array([np.cos(angle), np.sin(angle)])).T points = np.einsum('ijk,ik->ij', class_transforms[classes], points) points += class_centers[classes] return classes, points ###Output _____no_output_____ ###Markdown Let's plot a bunch of random points drawn from this distribution to see what it looks like. Points are colored based on their class label. ###Code %matplotlib inline import matplotlib.pyplot as plot classes, points = generate_data(1000) plot.scatter(x=points[:,0], y=points[:,1], c=classes) ###Output _____no_output_____ ###Markdown Now let's create the model for our CGAN. ###Code import deepchem.models.tensorgraph.layers as layers model = dc.models.TensorGraph(learning_rate=1e-4, use_queue=False) # Inputs to the model random_in = layers.Feature(shape=(None, 10)) # Random input to the generator generator_classes = layers.Feature(shape=(None, n_classes)) # The classes of the generated samples real_data_points = layers.Feature(shape=(None, 2)) # The training samples real_data_classes = layers.Feature(shape=(None, n_classes)) # The classes of the training samples is_real = layers.Weights(shape=(None, 1)) # Flags to distinguish real from generated samples # The generator gen_in = layers.Concat([random_in, generator_classes]) gen_dense1 = layers.Dense(30, in_layers=gen_in, activation_fn=tf.nn.relu) gen_dense2 = layers.Dense(30, in_layers=gen_dense1, activation_fn=tf.nn.relu) generator_points = layers.Dense(2, in_layers=gen_dense2) model.add_output(generator_points) # The discriminator all_points = layers.Concat([generator_points, real_data_points], axis=0) all_classes = layers.Concat([generator_classes, real_data_classes], axis=0) discrim_in = layers.Concat([all_points, all_classes]) discrim_dense1 = layers.Dense(30, in_layers=discrim_in, activation_fn=tf.nn.relu) discrim_dense2 = layers.Dense(30, in_layers=discrim_dense1, activation_fn=tf.nn.relu) discrim_prob = layers.Dense(1, in_layers=discrim_dense2, activation_fn=tf.sigmoid) ###Output _____no_output_____ ###Markdown We'll use different loss functions for training the generator and discriminator. The discriminator outputs its predictions in the form of a probability that each sample is a real sample (that is, that it came from the training set rather than the generator). Its loss consists of two terms. The first term tries to maximize the output probability for real data, and the second term tries to minimize the output probability for generated samples. 
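Written out (an illustrative restatement of the code below, with $D(x, c)$ the discriminator's probability output for a sample $x$ and conditional input $c$), the batch-averaged discriminator loss is $$L_D = -\,\mathbb{E}_{x \sim \text{real}}\big[\log D(x, c)\big] \;-\; \mathbb{E}_{\tilde{x} \sim \text{generated}}\big[\log\big(1 - D(\tilde{x}, c)\big)\big],$$ with a small constant ($10^{-10}$ in the code) added inside each logarithm for numerical stability. 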
The loss function for the generator is just a single term: it tries to maximize the discriminator's output probability for generated samples.For each one, we create a "submodel" specifying a set of layers that will be optimized based on a loss function. ###Code # Discriminator discrim_real_data_loss = -layers.Log(discrim_prob+1e-10) * is_real discrim_gen_data_loss = -layers.Log(1-discrim_prob+1e-10) * (1-is_real) discrim_loss = layers.ReduceMean(discrim_real_data_loss + discrim_gen_data_loss) discrim_submodel = model.create_submodel(layers=[discrim_dense1, discrim_dense2, discrim_prob], loss=discrim_loss) # Generator gen_loss = -layers.ReduceMean(layers.Log(discrim_prob+1e-10) * (1-is_real)) gen_submodel = model.create_submodel(layers=[gen_dense1, gen_dense2, generator_points], loss=gen_loss) ###Output _____no_output_____ ###Markdown Now to fit the model. Here are some important points to notice about the code.- We use `fit_generator()` to train only a single batch at a time, and we alternate between the discriminator and the generator. That way. both parts of the model improve together.- We only train the generator half as often as the discriminator. On this particular model, that gives much better results. You will often need to adjust `( of discriminator steps)/( of generator steps)` to get good results on a given problem.- We disable checkpointing by specifying `checkpoint_interval=0`. Since each call to `fit_generator()` includes only a single batch, it would otherwise save a checkpoint to disk after every batch, which would be very slow. If this were a real project and not just an example, we would want to occasionally call `model.save_checkpoint()` to write checkpoints at a reasonable interval. ###Code batch_size = model.batch_size discrim_error = [] gen_error = [] for step in range(20000): classes, points = generate_data(batch_size) class_flags = dc.metrics.to_one_hot(classes, n_classes) feed_dict={random_in: np.random.random((batch_size, 10)), generator_classes: class_flags, real_data_points: points, real_data_classes: class_flags, is_real: np.concatenate([np.zeros((batch_size,1)), np.ones((batch_size,1))])} discrim_error.append(model.fit_generator([feed_dict], submodel=discrim_submodel, checkpoint_interval=0)) if step%2 == 0: gen_error.append(model.fit_generator([feed_dict], submodel=gen_submodel, checkpoint_interval=0)) if step%1000 == 999: print(step, np.mean(discrim_error), np.mean(gen_error)) discrim_error = [] gen_error = [] ###Output WARNING:tensorflow:From /root/miniconda/lib/python3.6/site-packages/deepchem/models/tensorgraph/tensor_graph.py:714: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead. WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers. WARNING:tensorflow:From /root/miniconda/lib/python3.6/site-packages/deepchem/models/tensorgraph/layers.py:1634: The name tf.log is deprecated. Please use tf.math.log instead. WARNING:tensorflow:From /root/miniconda/lib/python3.6/site-packages/deepchem/models/tensorgraph/tensor_graph.py:727: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead. 
WARNING:tensorflow:From /root/miniconda/lib/python3.6/site-packages/deepchem/models/optimizers.py:76: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead. WARNING:tensorflow:From /root/miniconda/lib/python3.6/site-packages/deepchem/models/tensorgraph/tensor_graph.py:1012: The name tf.get_collection is deprecated. Please use tf.compat.v1.get_collection instead. WARNING:tensorflow:From /root/miniconda/lib/python3.6/site-packages/deepchem/models/tensorgraph/tensor_graph.py:1012: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead. WARNING:tensorflow:From /root/miniconda/lib/python3.6/site-packages/deepchem/models/tensorgraph/tensor_graph.py:738: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead. WARNING:tensorflow:From /root/miniconda/lib/python3.6/site-packages/deepchem/models/tensorgraph/tensor_graph.py:748: The name tf.summary.scalar is deprecated. Please use tf.compat.v1.summary.scalar instead. 999 0.5156213084459305 0.37282696121931075 1999 0.39635649234056475 0.6554632024765015 2999 0.4816185410916805 0.6439448493719101 3999 0.6881231372356414 0.41854076969623566 4999 0.6954806981682777 0.36900784534215925 5999 0.6934329395890236 0.34684676861763003 6999 0.6871857723593712 0.3469327309727669 7999 0.6882104944586754 0.35844097477197645 8999 0.6879851130247117 0.34883454167842864 9999 0.6891423400640487 0.3533225782513619 10999 0.6890938600897789 0.352202350795269 11999 0.6911078352332115 0.3480358254909515 12999 0.6913300577402115 0.34874600952863694 13999 0.6922475056052207 0.34867041957378386 14999 0.691593163728714 0.34903139680624007 15999 0.6911602554917335 0.35044702333211897 16999 0.6909645751714707 0.35226673740148545 17999 0.6911768457889557 0.3513581330180168 18999 0.6894893513917923 0.3482932530641556 19999 0.6915659754276275 0.35546432530879973 ###Markdown Have the trained model generate some data, and see how well it matches the training distribution we plotted before. ###Code classes, points = generate_data(1000) feed_dict = {random_in: np.random.random((1000, 10)), generator_classes: dc.metrics.to_one_hot(classes, n_classes)} gen_points = model.predict_on_generator([feed_dict]) plot.scatter(x=gen_points[:,0], y=gen_points[:,1], c=classes) ###Output _____no_output_____ ###Markdown Tutorial Part 16: Conditional Generative Adversarial NetworkA Generative Adversarial Network (GAN) is a type of generative model. It consists of two parts called the "generator" and the "discriminator". The generator takes random values as input and transforms them into an output that (hopefully) resembles the training data. The discriminator takes a set of samples as input and tries to distinguish the real training samples from the ones created by the generator. Both of them are trained together. The discriminator tries to get better and better at telling real from false data, while the generator tries to get better and better at fooling the discriminator.A Conditional GAN (CGAN) allows additional inputs to the generator and discriminator that their output is conditioned on. For example, this might be a class label, and the GAN tries to learn how the data distribution varies between classes. ColabThis tutorial and the rest in this sequence are designed to be done in Google colab. 
If you'd like to open this notebook in colab, you can use the following link.[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/16_Conditional_Generative_Adversarial_Networks.ipynb) SetupTo run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment. ###Code !curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py import conda_installer conda_installer.install() !/root/miniconda/bin/conda info -e !pip install --pre deepchem import deepchem deepchem.__version__ ###Output _____no_output_____ ###Markdown For this example, we will create a data distribution consisting of a set of ellipses in 2D, each with a random position, shape, and orientation. Each class corresponds to a different ellipse. Let's randomly generate the ellipses. For each one we select a random center position, X and Y size, and rotation angle. We then create a transformation matrix that maps the unit circle to the ellipse. ###Code import deepchem as dc import numpy as np import tensorflow as tf n_classes = 4 class_centers = np.random.uniform(-4, 4, (n_classes, 2)) class_transforms = [] for i in range(n_classes): xscale = np.random.uniform(0.5, 2) yscale = np.random.uniform(0.5, 2) angle = np.random.uniform(0, np.pi) m = [[xscale*np.cos(angle), -yscale*np.sin(angle)], [xscale*np.sin(angle), yscale*np.cos(angle)]] class_transforms.append(m) class_transforms = np.array(class_transforms) ###Output _____no_output_____ ###Markdown This function generates random data from the distribution. For each point it chooses a random class, then a random position in that class' ellipse. ###Code def generate_data(n_points): classes = np.random.randint(n_classes, size=n_points) r = np.random.random(n_points) angle = 2*np.pi*np.random.random(n_points) points = (r*np.array([np.cos(angle), np.sin(angle)])).T points = np.einsum('ijk,ik->ij', class_transforms[classes], points) points += class_centers[classes] return classes, points ###Output _____no_output_____ ###Markdown Let's plot a bunch of random points drawn from this distribution to see what it looks like. Points are colored based on their class label. ###Code %matplotlib inline import matplotlib.pyplot as plot classes, points = generate_data(1000) plot.scatter(x=points[:,0], y=points[:,1], c=classes) ###Output _____no_output_____ ###Markdown Now let's create the model for our CGAN. DeepChem's GAN class makes this very easy. We just subclass it and implement a few methods. The two most important are:- `create_generator()` constructs a model implementing the generator. The model takes as input a batch of random noise plus any condition variables (in our case, the one-hot encoded class of each sample). Its output is a synthetic sample that is supposed to resemble the training data.- `create_discriminator()` constructs a model implementing the discriminator. The model takes as input the samples to evaluate (which might be either real training data or synthetic samples created by the generator) and the condition variables. Its output is a single number for each sample, which will be interpreted as the probability that the sample is real training data.In this case, we use very simple models. They just concatenate the inputs together and pass them through a few dense layers. 
Notice that the final layer of the discriminator uses a sigmoid activation. This ensures it produces an output between 0 and 1 that can be interpreted as a probability.We also need to implement a few methods that define the shapes of the various inputs. We specify that the random noise provided to the generator should consist of ten numbers for each sample; that each data sample consists of two numbers (the X and Y coordinates of a point in 2D); and that the conditional input consists of `n_classes` for each sample (the one-hot encoded class index). ###Code from tensorflow.keras.layers import Concatenate, Dense, Input class ExampleGAN(dc.models.GAN): def get_noise_input_shape(self): return (10,) def get_data_input_shapes(self): return [(2,)] def get_conditional_input_shapes(self): return [(n_classes,)] def create_generator(self): noise_in = Input(shape=(10,)) conditional_in = Input(shape=(n_classes,)) gen_in = Concatenate()([noise_in, conditional_in]) gen_dense1 = Dense(30, activation=tf.nn.relu)(gen_in) gen_dense2 = Dense(30, activation=tf.nn.relu)(gen_dense1) generator_points = Dense(2)(gen_dense2) return tf.keras.Model(inputs=[noise_in, conditional_in], outputs=[generator_points]) def create_discriminator(self): data_in = Input(shape=(2,)) conditional_in = Input(shape=(n_classes,)) discrim_in = Concatenate()([data_in, conditional_in]) discrim_dense1 = Dense(30, activation=tf.nn.relu)(discrim_in) discrim_dense2 = Dense(30, activation=tf.nn.relu)(discrim_dense1) discrim_prob = Dense(1, activation=tf.sigmoid)(discrim_dense2) return tf.keras.Model(inputs=[data_in, conditional_in], outputs=[discrim_prob]) gan = ExampleGAN(learning_rate=1e-4) ###Output _____no_output_____ ###Markdown Now to fit the model. We do this by calling `fit_gan()`. The argument is an iterator that produces batches of training data. More specifically, it needs to produces dicts that map all data inputs and conditional inputs to the values to use for them. In our case we can easily create as much random data as we need, so we define a generator that calls the `generate_data()` function defined above for each new batch. ###Code def iterbatches(batches): for i in range(batches): classes, points = generate_data(gan.batch_size) classes = dc.metrics.to_one_hot(classes, n_classes) yield {gan.data_inputs[0]: points, gan.conditional_inputs[0]: classes} gan.fit_gan(iterbatches(5000)) ###Output Ending global_step 999: generator average loss 0.87121, discriminator average loss 1.08472 Ending global_step 1999: generator average loss 0.968357, discriminator average loss 1.17393 Ending global_step 2999: generator average loss 0.710444, discriminator average loss 1.37858 Ending global_step 3999: generator average loss 0.699195, discriminator average loss 1.38131 Ending global_step 4999: generator average loss 0.694203, discriminator average loss 1.3871 TIMING: model fitting took 31.352 s ###Markdown Have the trained model generate some data, and see how well it matches the training distribution we plotted before. ###Code classes, points = generate_data(1000) one_hot_classes = dc.metrics.to_one_hot(classes, n_classes) gen_points = gan.predict_gan_generator(conditional_inputs=[one_hot_classes]) plot.scatter(x=gen_points[:,0], y=gen_points[:,1], c=classes) ###Output _____no_output_____ ###Markdown Tutorial Part 16: Conditional Generative Adversarial Network*Note: This example implements a GAN from scratch. The same model could be implemented much more easily with the `dc.models.GAN` class. 
See the MNIST GAN notebook for an example of using that class. It can still be useful to know how to implement a GAN from scratch for advanced situations that are beyond the scope of what the standard GAN class supports.*A Generative Adversarial Network (GAN) is a type of generative model. It consists of two parts called the "generator" and the "discriminator". The generator takes random values as input and transforms them into an output that (hopefully) resembles the training data. The discriminator takes a set of samples as input and tries to distinguish the real training samples from the ones created by the generator. Both of them are trained together. The discriminator tries to get better and better at telling real from false data, while the generator tries to get better and better at fooling the discriminator.A Conditional GAN (CGAN) allows additional inputs to the generator and discriminator that their output is conditioned on. For example, this might be a class label, and the GAN tries to learn how the data distribution varies between classes. ColabThis tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/16_Conditional_Generative_Adversarial_Networks.ipynb) SetupTo run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment. ###Code %%capture %tensorflow_version 1.x !wget -c https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh !chmod +x Miniconda3-latest-Linux-x86_64.sh !bash ./Miniconda3-latest-Linux-x86_64.sh -b -f -p /usr/local !conda install -y -c deepchem -c rdkit -c conda-forge -c omnia deepchem-gpu=2.3.0 import sys sys.path.append('/usr/local/lib/python3.7/site-packages/') ###Output _____no_output_____ ###Markdown For this example, we will create a data distribution consisting of a set of ellipses in 2D, each with a random position, shape, and orientation. Each class corresponds to a different ellipse. Let's randomly generate the ellipses. ###Code import deepchem as dc import numpy as np import tensorflow as tf n_classes = 4 class_centers = np.random.uniform(-4, 4, (n_classes, 2)) class_transforms = [] for i in range(n_classes): xscale = np.random.uniform(0.5, 2) yscale = np.random.uniform(0.5, 2) angle = np.random.uniform(0, np.pi) m = [[xscale*np.cos(angle), -yscale*np.sin(angle)], [xscale*np.sin(angle), yscale*np.cos(angle)]] class_transforms.append(m) class_transforms = np.array(class_transforms) ###Output /usr/local/lib/python3.6/dist-packages/sklearn/externals/joblib/__init__.py:15: FutureWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+. warnings.warn(msg, category=FutureWarning) ###Markdown This function generates random data from the distribution. For each point it chooses a random class, then a random position in that class' ellipse. 
###Code def generate_data(n_points): classes = np.random.randint(n_classes, size=n_points) r = np.random.random(n_points) angle = 2*np.pi*np.random.random(n_points) points = (r*np.array([np.cos(angle), np.sin(angle)])).T points = np.einsum('ijk,ik->ij', class_transforms[classes], points) points += class_centers[classes] return classes, points ###Output _____no_output_____ ###Markdown Let's plot a bunch of random points drawn from this distribution to see what it looks like. Points are colored based on their class label. ###Code %matplotlib inline import matplotlib.pyplot as plot classes, points = generate_data(1000) plot.scatter(x=points[:,0], y=points[:,1], c=classes) ###Output _____no_output_____ ###Markdown Now let's create the model for our CGAN. ###Code import deepchem.models.tensorgraph.layers as layers model = dc.models.TensorGraph(learning_rate=1e-4, use_queue=False) # Inputs to the model random_in = layers.Feature(shape=(None, 10)) # Random input to the generator generator_classes = layers.Feature(shape=(None, n_classes)) # The classes of the generated samples real_data_points = layers.Feature(shape=(None, 2)) # The training samples real_data_classes = layers.Feature(shape=(None, n_classes)) # The classes of the training samples is_real = layers.Weights(shape=(None, 1)) # Flags to distinguish real from generated samples # The generator gen_in = layers.Concat([random_in, generator_classes]) gen_dense1 = layers.Dense(30, in_layers=gen_in, activation_fn=tf.nn.relu) gen_dense2 = layers.Dense(30, in_layers=gen_dense1, activation_fn=tf.nn.relu) generator_points = layers.Dense(2, in_layers=gen_dense2) model.add_output(generator_points) # The discriminator all_points = layers.Concat([generator_points, real_data_points], axis=0) all_classes = layers.Concat([generator_classes, real_data_classes], axis=0) discrim_in = layers.Concat([all_points, all_classes]) discrim_dense1 = layers.Dense(30, in_layers=discrim_in, activation_fn=tf.nn.relu) discrim_dense2 = layers.Dense(30, in_layers=discrim_dense1, activation_fn=tf.nn.relu) discrim_prob = layers.Dense(1, in_layers=discrim_dense2, activation_fn=tf.sigmoid) ###Output _____no_output_____ ###Markdown We'll use different loss functions for training the generator and discriminator. The discriminator outputs its predictions in the form of a probability that each sample is a real sample (that is, that it came from the training set rather than the generator). Its loss consists of two terms. The first term tries to maximize the output probability for real data, and the second term tries to minimize the output probability for generated samples. The loss function for the generator is just a single term: it tries to maximize the discriminator's output probability for generated samples.For each one, we create a "submodel" specifying a set of layers that will be optimized based on a loss function. ###Code # Discriminator discrim_real_data_loss = -layers.Log(discrim_prob+1e-10) * is_real discrim_gen_data_loss = -layers.Log(1-discrim_prob+1e-10) * (1-is_real) discrim_loss = layers.ReduceMean(discrim_real_data_loss + discrim_gen_data_loss) discrim_submodel = model.create_submodel(layers=[discrim_dense1, discrim_dense2, discrim_prob], loss=discrim_loss) # Generator gen_loss = -layers.ReduceMean(layers.Log(discrim_prob+1e-10) * (1-is_real)) gen_submodel = model.create_submodel(layers=[gen_dense1, gen_dense2, generator_points], loss=gen_loss) ###Output _____no_output_____ ###Markdown Now to fit the model. 
Here are some important points to notice about the code.- We use `fit_generator()` to train only a single batch at a time, and we alternate between the discriminator and the generator. That way. both parts of the model improve together.- We only train the generator half as often as the discriminator. On this particular model, that gives much better results. You will often need to adjust `( of discriminator steps)/( of generator steps)` to get good results on a given problem.- We disable checkpointing by specifying `checkpoint_interval=0`. Since each call to `fit_generator()` includes only a single batch, it would otherwise save a checkpoint to disk after every batch, which would be very slow. If this were a real project and not just an example, we would want to occasionally call `model.save_checkpoint()` to write checkpoints at a reasonable interval. ###Code batch_size = model.batch_size discrim_error = [] gen_error = [] for step in range(20000): classes, points = generate_data(batch_size) class_flags = dc.metrics.to_one_hot(classes, n_classes) feed_dict={random_in: np.random.random((batch_size, 10)), generator_classes: class_flags, real_data_points: points, real_data_classes: class_flags, is_real: np.concatenate([np.zeros((batch_size,1)), np.ones((batch_size,1))])} discrim_error.append(model.fit_generator([feed_dict], submodel=discrim_submodel, checkpoint_interval=0)) if step%2 == 0: gen_error.append(model.fit_generator([feed_dict], submodel=gen_submodel, checkpoint_interval=0)) if step%1000 == 999: print(step, np.mean(discrim_error), np.mean(gen_error)) discrim_error = [] gen_error = [] ###Output WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/deepchem/models/tensorgraph/tensor_graph.py:714: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead. WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers. WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/deepchem/models/tensorgraph/layers.py:1634: The name tf.log is deprecated. Please use tf.math.log instead. WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/deepchem/models/tensorgraph/tensor_graph.py:727: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead. WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/deepchem/models/optimizers.py:76: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead. WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/deepchem/models/tensorgraph/tensor_graph.py:1012: The name tf.get_collection is deprecated. Please use tf.compat.v1.get_collection instead. WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/deepchem/models/tensorgraph/tensor_graph.py:1012: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead. WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/deepchem/models/tensorgraph/tensor_graph.py:738: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead. WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/deepchem/models/tensorgraph/tensor_graph.py:748: The name tf.summary.scalar is deprecated. 
Please use tf.compat.v1.summary.scalar instead. 999 0.5353480346798897 0.42010479140281676 1999 0.4970605837404728 0.5927191683650017 2999 0.32337378211319445 0.7835518234968185 3999 0.6442742827236653 0.44793607395887375 4999 0.7197044906020165 0.3560899528264999 5999 0.6842574814558029 0.35353413671255113 6999 0.681817068874836 0.35413561046123504 7999 0.6773326816558838 0.36418179589509964 8999 0.680501739025116 0.3696627883315086 9999 0.6853346248865128 0.35303945660591124 10999 0.6879503725767135 0.35497990638017657 11999 0.6917924422621727 0.3506708189845085 12999 0.6924710651636123 0.3495020810961723 13999 0.6911373255252838 0.3482625074982643 14999 0.6910281682610512 0.35186264622211455 15999 0.6905803002119064 0.3522126387357712 16999 0.6895883530378342 0.3526522752642631 17999 0.6900975884199142 0.35136979585886 18999 0.6905963387489319 0.34954379898309706 19999 0.6901651693582534 0.3524894942045212 ###Markdown Have the trained model generate some data, and see how well it matches the training distribution we plotted before. ###Code classes, points = generate_data(1000) feed_dict = {random_in: np.random.random((1000, 10)), generator_classes: dc.metrics.to_one_hot(classes, n_classes)} gen_points = model.predict_on_generator([feed_dict]) plot.scatter(x=gen_points[:,0], y=gen_points[:,1], c=classes) ###Output _____no_output_____ ###Markdown Tutorial Part 16: Conditional Generative Adversarial Network*Note: This example implements a GAN from scratch. The same model could be implemented much more easily with the `dc.models.GAN` class. See the MNIST GAN notebook for an example of using that class. It can still be useful to know how to implement a GAN from scratch for advanced situations that are beyond the scope of what the standard GAN class supports.*A Generative Adversarial Network (GAN) is a type of generative model. It consists of two parts called the "generator" and the "discriminator". The generator takes random values as input and transforms them into an output that (hopefully) resembles the training data. The discriminator takes a set of samples as input and tries to distinguish the real training samples from the ones created by the generator. Both of them are trained together. The discriminator tries to get better and better at telling real from false data, while the generator tries to get better and better at fooling the discriminator.A Conditional GAN (CGAN) allows additional inputs to the generator and discriminator that their output is conditioned on. For example, this might be a class label, and the GAN tries to learn how the data distribution varies between classes. ColabThis tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/16_Conditional_Generative_Adversarial_Networks.ipynb) SetupTo run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment. 
###Code !wget -c https://repo.anaconda.com/archive/Anaconda3-2019.10-Linux-x86_64.sh !chmod +x Anaconda3-2019.10-Linux-x86_64.sh !bash ./Anaconda3-2019.10-Linux-x86_64.sh -b -f -p /usr/local !conda install -y -c deepchem -c rdkit -c conda-forge -c omnia deepchem-gpu=2.3.0 import sys sys.path.append('/usr/local/lib/python3.7/site-packages/') ###Output _____no_output_____ ###Markdown For this example, we will create a data distribution consisting of a set of ellipses in 2D, each with a random position, shape, and orientation. Each class corresponds to a different ellipse. Let's randomly generate the ellipses. ###Code import deepchem as dc import numpy as np import tensorflow as tf n_classes = 4 class_centers = np.random.uniform(-4, 4, (n_classes, 2)) class_transforms = [] for i in range(n_classes): xscale = np.random.uniform(0.5, 2) yscale = np.random.uniform(0.5, 2) angle = np.random.uniform(0, np.pi) m = [[xscale*np.cos(angle), -yscale*np.sin(angle)], [xscale*np.sin(angle), yscale*np.cos(angle)]] class_transforms.append(m) class_transforms = np.array(class_transforms) ###Output _____no_output_____ ###Markdown This function generates random data from the distribution. For each point it chooses a random class, then a random position in that class' ellipse. ###Code def generate_data(n_points): classes = np.random.randint(n_classes, size=n_points) r = np.random.random(n_points) angle = 2*np.pi*np.random.random(n_points) points = (r*np.array([np.cos(angle), np.sin(angle)])).T points = np.einsum('ijk,ik->ij', class_transforms[classes], points) points += class_centers[classes] return classes, points ###Output _____no_output_____ ###Markdown Let's plot a bunch of random points drawn from this distribution to see what it looks like. Points are colored based on their class label. ###Code %matplotlib inline import matplotlib.pyplot as plot classes, points = generate_data(1000) plot.scatter(x=points[:,0], y=points[:,1], c=classes) ###Output _____no_output_____ ###Markdown Now let's create the model for our CGAN. ###Code import deepchem.models.tensorgraph.layers as layers model = dc.models.TensorGraph(learning_rate=1e-4, use_queue=False) # Inputs to the model random_in = layers.Feature(shape=(None, 10)) # Random input to the generator generator_classes = layers.Feature(shape=(None, n_classes)) # The classes of the generated samples real_data_points = layers.Feature(shape=(None, 2)) # The training samples real_data_classes = layers.Feature(shape=(None, n_classes)) # The classes of the training samples is_real = layers.Weights(shape=(None, 1)) # Flags to distinguish real from generated samples # The generator gen_in = layers.Concat([random_in, generator_classes]) gen_dense1 = layers.Dense(30, in_layers=gen_in, activation_fn=tf.nn.relu) gen_dense2 = layers.Dense(30, in_layers=gen_dense1, activation_fn=tf.nn.relu) generator_points = layers.Dense(2, in_layers=gen_dense2) model.add_output(generator_points) # The discriminator all_points = layers.Concat([generator_points, real_data_points], axis=0) all_classes = layers.Concat([generator_classes, real_data_classes], axis=0) discrim_in = layers.Concat([all_points, all_classes]) discrim_dense1 = layers.Dense(30, in_layers=discrim_in, activation_fn=tf.nn.relu) discrim_dense2 = layers.Dense(30, in_layers=discrim_dense1, activation_fn=tf.nn.relu) discrim_prob = layers.Dense(1, in_layers=discrim_dense2, activation_fn=tf.sigmoid) ###Output _____no_output_____ ###Markdown We'll use different loss functions for training the generator and discriminator. 
The discriminator outputs its predictions in the form of a probability that each sample is a real sample (that is, that it came from the training set rather than the generator). Its loss consists of two terms. The first term tries to maximize the output probability for real data, and the second term tries to minimize the output probability for generated samples. The loss function for the generator is just a single term: it tries to maximize the discriminator's output probability for generated samples.For each one, we create a "submodel" specifying a set of layers that will be optimized based on a loss function. ###Code # Discriminator discrim_real_data_loss = -layers.Log(discrim_prob+1e-10) * is_real discrim_gen_data_loss = -layers.Log(1-discrim_prob+1e-10) * (1-is_real) discrim_loss = layers.ReduceMean(discrim_real_data_loss + discrim_gen_data_loss) discrim_submodel = model.create_submodel(layers=[discrim_dense1, discrim_dense2, discrim_prob], loss=discrim_loss) # Generator gen_loss = -layers.ReduceMean(layers.Log(discrim_prob+1e-10) * (1-is_real)) gen_submodel = model.create_submodel(layers=[gen_dense1, gen_dense2, generator_points], loss=gen_loss) ###Output _____no_output_____ ###Markdown Now to fit the model. Here are some important points to notice about the code.- We use `fit_generator()` to train only a single batch at a time, and we alternate between the discriminator and the generator. That way. both parts of the model improve together.- We only train the generator half as often as the discriminator. On this particular model, that gives much better results. You will often need to adjust `( of discriminator steps)/( of generator steps)` to get good results on a given problem.- We disable checkpointing by specifying `checkpoint_interval=0`. Since each call to `fit_generator()` includes only a single batch, it would otherwise save a checkpoint to disk after every batch, which would be very slow. If this were a real project and not just an example, we would want to occasionally call `model.save_checkpoint()` to write checkpoints at a reasonable interval. 
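(As an illustrative fragment, not in the original tutorial: that occasional save could be a two-line addition inside the loop below, e.g. `if step % 5000 == 4999: model.save_checkpoint()`, with the interval chosen to suit the run length.) 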
###Code batch_size = model.batch_size discrim_error = [] gen_error = [] for step in range(20000): classes, points = generate_data(batch_size) class_flags = dc.metrics.to_one_hot(classes, n_classes) feed_dict={random_in: np.random.random((batch_size, 10)), generator_classes: class_flags, real_data_points: points, real_data_classes: class_flags, is_real: np.concatenate([np.zeros((batch_size,1)), np.ones((batch_size,1))])} discrim_error.append(model.fit_generator([feed_dict], submodel=discrim_submodel, checkpoint_interval=0)) if step%2 == 0: gen_error.append(model.fit_generator([feed_dict], submodel=gen_submodel, checkpoint_interval=0)) if step%1000 == 999: print(step, np.mean(discrim_error), np.mean(gen_error)) discrim_error = [] gen_error = [] ###Output 999 1.55168287337 0.0408441077992 1999 1.09775845635 0.10187220595 2999 0.645102792382 0.287973743975 3999 0.680221937269 0.421649519399 4999 1.01530539939 0.21572620222 5999 0.355141538292 0.51490073061 6999 0.342691998512 0.522037278384 7999 0.668916338205 0.268597296476 8999 0.716327803612 0.262840693563 9999 0.713392020047 0.29249474436 10999 0.701064119875 0.309196961999 11999 0.69168697983 0.326060062766 12999 0.688140272975 0.335194783509 13999 0.687209348738 0.336962600589 14999 0.685669041574 0.343026666999 15999 0.686265172005 0.343531137526 16999 0.686260136843 0.346471596062 17999 0.687051311135 0.347908975482 18999 0.689062111676 0.344198456705 19999 0.691419941247 0.344754773915 ###Markdown Have the trained model generate some data, and see how well it matches the training distribution we plotted before. ###Code classes, points = generate_data(1000) feed_dict = {random_in: np.random.random((1000, 10)), generator_classes: dc.metrics.to_one_hot(classes, n_classes)} gen_points = model.predict_on_generator([feed_dict]) plot.scatter(x=gen_points[:,0], y=gen_points[:,1], c=classes) ###Output _____no_output_____ ###Markdown Tutorial Part 16: Conditional Generative Adversarial Network*Note: This example implements a GAN from scratch. The same model could be implemented much more easily with the `dc.models.GAN` class. See the MNIST GAN notebook for an example of using that class. It can still be useful to know how to implement a GAN from scratch for advanced situations that are beyond the scope of what the standard GAN class supports.*A Generative Adversarial Network (GAN) is a type of generative model. It consists of two parts called the "generator" and the "discriminator". The generator takes random values as input and transforms them into an output that (hopefully) resembles the training data. The discriminator takes a set of samples as input and tries to distinguish the real training samples from the ones created by the generator. Both of them are trained together. The discriminator tries to get better and better at telling real from false data, while the generator tries to get better and better at fooling the discriminator.A Conditional GAN (CGAN) allows additional inputs to the generator and discriminator that their output is conditioned on. For example, this might be a class label, and the GAN tries to learn how the data distribution varies between classes. ColabThis tutorial and the rest in this sequence are designed to be done in Google colab. 
If you'd like to open this notebook in colab, you can use the following link.[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/16_Conditional_Generative_Adversarial_Networks.ipynb) SetupTo run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment. ###Code !curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py import conda_installer conda_installer.install() !/root/miniconda/bin/conda info -e !pip install --pre deepchem import deepchem deepchem.__version__ ###Output Requirement already satisfied: deepchem in /usr/local/lib/python3.6/dist-packages (2.4.0rc1.dev20200805144807) Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from deepchem) (1.4.1) Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from deepchem) (1.18.5) Requirement already satisfied: scikit-learn in /usr/local/lib/python3.6/dist-packages (from deepchem) (0.22.2.post1) Requirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from deepchem) (0.16.0) Requirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from deepchem) (1.0.5) Requirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas->deepchem) (2.8.1) Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas->deepchem) (2018.9) Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.6.1->pandas->deepchem) (1.15.0) ###Markdown For this example, we will create a data distribution consisting of a set of ellipses in 2D, each with a random position, shape, and orientation. Each class corresponds to a different ellipse. Let's randomly generate the ellipses. ###Code import deepchem as dc import numpy as np import tensorflow as tf n_classes = 4 class_centers = np.random.uniform(-4, 4, (n_classes, 2)) class_transforms = [] for i in range(n_classes): xscale = np.random.uniform(0.5, 2) yscale = np.random.uniform(0.5, 2) angle = np.random.uniform(0, np.pi) m = [[xscale*np.cos(angle), -yscale*np.sin(angle)], [xscale*np.sin(angle), yscale*np.cos(angle)]] class_transforms.append(m) class_transforms = np.array(class_transforms) ###Output _____no_output_____ ###Markdown This function generates random data from the distribution. For each point it chooses a random class, then a random position in that class' ellipse. ###Code def generate_data(n_points): classes = np.random.randint(n_classes, size=n_points) r = np.random.random(n_points) angle = 2*np.pi*np.random.random(n_points) points = (r*np.array([np.cos(angle), np.sin(angle)])).T points = np.einsum('ijk,ik->ij', class_transforms[classes], points) points += class_centers[classes] return classes, points ###Output _____no_output_____ ###Markdown Let's plot a bunch of random points drawn from this distribution to see what it looks like. Points are colored based on their class label. ###Code %matplotlib inline import matplotlib.pyplot as plot classes, points = generate_data(1000) plot.scatter(x=points[:,0], y=points[:,1], c=classes) ###Output _____no_output_____ ###Markdown Now let's create the model for our CGAN. 
###Code # import deepchem.models.tensorgraph.layers as layers # model = dc.models.TensorGraph(learning_rate=1e-4, use_queue=False) # # Inputs to the model # random_in = layers.Feature(shape=(None, 10)) # Random input to the generator # generator_classes = layers.Feature(shape=(None, n_classes)) # The classes of the generated samples # real_data_points = layers.Feature(shape=(None, 2)) # The training samples # real_data_classes = layers.Feature(shape=(None, n_classes)) # The classes of the training samples # is_real = layers.Weights(shape=(None, 1)) # Flags to distinguish real from generated samples # # The generator # gen_in = layers.Concat([random_in, generator_classes]) # gen_dense1 = layers.Dense(30, in_layers=gen_in, activation_fn=tf.nn.relu) # gen_dense2 = layers.Dense(30, in_layers=gen_dense1, activation_fn=tf.nn.relu) # generator_points = layers.Dense(2, in_layers=gen_dense2) # model.add_output(generator_points) # # The discriminator # all_points = layers.Concat([generator_points, real_data_points], axis=0) # all_classes = layers.Concat([generator_classes, real_data_classes], axis=0) # discrim_in = layers.Concat([all_points, all_classes]) # discrim_dense1 = layers.Dense(30, in_layers=discrim_in, activation_fn=tf.nn.relu) # discrim_dense2 = layers.Dense(30, in_layers=discrim_dense1, activation_fn=tf.nn.relu) # discrim_prob = layers.Dense(1, in_layers=discrim_dense2, activation_fn=tf.sigmoid) ###Output _____no_output_____ ###Markdown We'll use different loss functions for training the generator and discriminator. The discriminator outputs its predictions in the form of a probability that each sample is a real sample (that is, that it came from the training set rather than the generator). Its loss consists of two terms. The first term tries to maximize the output probability for real data, and the second term tries to minimize the output probability for generated samples. The loss function for the generator is just a single term: it tries to maximize the discriminator's output probability for generated samples.For each one, we create a "submodel" specifying a set of layers that will be optimized based on a loss function. ###Code # # Discriminator # discrim_real_data_loss = -layers.Log(discrim_prob+1e-10) * is_real # discrim_gen_data_loss = -layers.Log(1-discrim_prob+1e-10) * (1-is_real) # discrim_loss = layers.ReduceMean(discrim_real_data_loss + discrim_gen_data_loss) # discrim_submodel = model.create_submodel(layers=[discrim_dense1, discrim_dense2, discrim_prob], loss=discrim_loss) # # Generator # gen_loss = -layers.ReduceMean(layers.Log(discrim_prob+1e-10) * (1-is_real)) # gen_submodel = model.create_submodel(layers=[gen_dense1, gen_dense2, generator_points], loss=gen_loss) ###Output _____no_output_____ ###Markdown Now to fit the model. Here are some important points to notice about the code.- We use `fit_generator()` to train only a single batch at a time, and we alternate between the discriminator and the generator. That way. both parts of the model improve together.- We only train the generator half as often as the discriminator. On this particular model, that gives much better results. You will often need to adjust `( of discriminator steps)/( of generator steps)` to get good results on a given problem.- We disable checkpointing by specifying `checkpoint_interval=0`. Since each call to `fit_generator()` includes only a single batch, it would otherwise save a checkpoint to disk after every batch, which would be very slow. 
If this were a real project and not just an example, we would want to occasionally call `model.save_checkpoint()` to write checkpoints at a reasonable interval. ###Code # batch_size = model.batch_size # discrim_error = [] # gen_error = [] # for step in range(20000): # classes, points = generate_data(batch_size) # class_flags = dc.metrics.to_one_hot(classes, n_classes) # feed_dict={random_in: np.random.random((batch_size, 10)), # generator_classes: class_flags, # real_data_points: points, # real_data_classes: class_flags, # is_real: np.concatenate([np.zeros((batch_size,1)), np.ones((batch_size,1))])} # discrim_error.append(model.fit_generator([feed_dict], # submodel=discrim_submodel, # checkpoint_interval=0)) # if step%2 == 0: # gen_error.append(model.fit_generator([feed_dict], # submodel=gen_submodel, # checkpoint_interval=0)) # if step%1000 == 999: # print(step, np.mean(discrim_error), np.mean(gen_error)) # discrim_error = [] # gen_error = [] ###Output _____no_output_____ ###Markdown Have the trained model generate some data, and see how well it matches the training distribution we plotted before. ###Code # classes, points = generate_data(1000) # feed_dict = {random_in: np.random.random((1000, 10)), # generator_classes: dc.metrics.to_one_hot(classes, n_classes)} # gen_points = model.predict_on_generator([feed_dict]) # plot.scatter(x=gen_points[:,0], y=gen_points[:,1], c=classes) ###Output _____no_output_____ ###Markdown Tutorial Part 16: Conditional Generative Adversarial Network*Note: This example implements a GAN from scratch. The same model could be implemented much more easily with the `dc.models.GAN` class. See the MNIST GAN notebook for an example of using that class. It can still be useful to know how to implement a GAN from scratch for advanced situations that are beyond the scope of what the standard GAN class supports.*A Generative Adversarial Network (GAN) is a type of generative model. It consists of two parts called the "generator" and the "discriminator". The generator takes random values as input and transforms them into an output that (hopefully) resembles the training data. The discriminator takes a set of samples as input and tries to distinguish the real training samples from the ones created by the generator. Both of them are trained together. The discriminator tries to get better and better at telling real from false data, while the generator tries to get better and better at fooling the discriminator.A Conditional GAN (CGAN) allows additional inputs to the generator and discriminator that their output is conditioned on. For example, this might be a class label, and the GAN tries to learn how the data distribution varies between classes. ColabThis tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/16_Conditional_Generative_Adversarial_Networks.ipynb) SetupTo run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment. ###Code %tensorflow_version 1.x !curl -Lo deepchem_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py import deepchem_installer %time deepchem_installer.install(version='2.3.0') ###Output TensorFlow 1.x selected. 
% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 3477 100 3477 0 0 7902 0 --:--:-- --:--:-- --:--:-- 7884
21 - Customer Analytics in Python/8_Modeling Purchase Incidence/2_Prepare the Dataset for Logistic Regression (1:22)/Purchase Analytics Predictive Analysis with Comments 9.2.ipynb
###Markdown Libraries ###Code import numpy as np import pandas as pd # We import the sk learn modules we'll need to segment our new data. We'll need scaler, pca and k-means. from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA from sklearn.cluster import KMeans # We import pickle in order to be able to load our pickled objects. import pickle # We import the Logistic Regression module from sk learn for the purchase probability model. from sklearn.linear_model import LogisticRegression #We import the necessary libraries for visualization. We set seaborn do be our default. import matplotlib.pyplot as plt import matplotlib.axes as axs import seaborn as sns sns.set() # We import the Linear Regression module from sk learn for the quantity model. from sklearn.linear_model import LinearRegression ###Output _____no_output_____ ###Markdown Data Preparation ###Code #load data df_purchase = pd.read_csv('purchase data.csv') # Import Scaler scaler = pickle.load(open('scaler.pickle', 'rb')) # Import PCA pca = pickle.load(open('pca.pickle', 'rb')) # Import K-Means kmeans_pca = pickle.load(open('kmeans_pca.pickle', 'rb')) # Standardization features = df_purchase[['Sex', 'Marital status', 'Age', 'Education', 'Income', 'Occupation', 'Settlement size']] df_purchase_segm_std = scaler.transform(features) # Apply PCA df_purchase_segm_pca = pca.transform(df_purchase_segm_std) # Segment data purchase_segm_kmeans_pca = kmeans_pca.predict(df_purchase_segm_pca) # Create a copy of the data frame df_purchase_predictors = df_purchase.copy() # Add segment labels df_purchase_predictors['Segment'] = purchase_segm_kmeans_pca segment_dummies = pd.get_dummies(purchase_segm_kmeans_pca, prefix = 'Segment', prefix_sep = '_') df_purchase_predictors = pd.concat([df_purchase_predictors, segment_dummies], axis = 1) df_pa = df_purchase_predictors ###Output _____no_output_____
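###Markdown The cells above stop after assembling `df_pa`; as a hedged sketch of the step the `LogisticRegression` import anticipates (the purchase-incidence model), one might regress the incidence flag on the average brand price. The column names used here (`Incidence`, `Price_1` ... `Price_5`) are assumptions about `purchase data.csv` and may differ in your file. ###Code # Hedged sketch of a purchase probability model (column names are assumptions)
Y = df_pa['Incidence']
X = pd.DataFrame()
X['Mean_Price'] = (df_pa['Price_1'] + df_pa['Price_2'] + df_pa['Price_3'] +
                   df_pa['Price_4'] + df_pa['Price_5']) / 5
model_purchase = LogisticRegression(solver='sag')
model_purchase.fit(X, Y)
model_purchase.coef_ ###Output _____no_output_____ 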
RealData_Exp2/lockout_05.ipynb
###Markdown Lockout for Regression: Residential Building Data SetAuthor: Wilmer Arbelo Gonzalez Import all necessary libraries ###Code # Import libraries and modules import numpy as np import pandas as pd import xgboost as xgb from sklearn.metrics import r2_score, classification_report, confusion_matrix, \ roc_curve, roc_auc_score, plot_confusion_matrix, f1_score, \ balanced_accuracy_score, accuracy_score, mean_squared_error, \ log_loss from sklearn.datasets import make_friedman1 from sklearn.model_selection import train_test_split, GridSearchCV from sklearn.linear_model import LogisticRegression, LinearRegression, SGDClassifier, \ Lasso, lasso_path from sklearn.preprocessing import StandardScaler, LabelBinarizer from sklearn.impute import SimpleImputer from sklearn.pipeline import Pipeline from sklearn_pandas import DataFrameMapper import scipy from scipy import stats import os import sys import shutil from pathlib import Path import matplotlib as mpl import matplotlib.pyplot as plt import matplotlib.cm as cm import albumentations as A from albumentations.pytorch import ToTensorV2 import cv2 import itertools import time import tqdm import copy import torch import torchvision import torchvision.transforms as transforms import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import torchvision.models as models from torch.utils.data import Dataset import PIL import joblib import json # import mysgd ###Output _____no_output_____ ###Markdown Datasets and DataLoaders ###Code # Load data def load_data(fname): """ input (str or path): name of the folder with the data. This function reads the data, transforms it to tensors, and makes it global. """ global xtrain, xvalid, xtest, xtrain_valid, ytrain, yvalid, ytest, ytrain_valid # df = pd.read_csv(os.path.join(fname, 'xtrain.csv'), index_col=False) xtrain = torch.tensor(df.values).float() df = pd.read_csv(os.path.join(fname, 'xvalid.csv'), index_col=False) xvalid = torch.tensor(df.values).float() df = pd.read_csv(os.path.join(fname, 'xtest.csv'), index_col=False) xtest = torch.tensor(df.values).float() df = pd.read_csv(os.path.join(fname, 'xtrain_valid.csv'), index_col=False) xtrain_valid = torch.tensor(df.values).float() # df = pd.read_csv(os.path.join(fname, 'ytrain.csv'), index_col=False) ytrain = torch.tensor(df.values).float() df = pd.read_csv(os.path.join(fname, 'yvalid.csv'), index_col=False) yvalid = torch.tensor(df.values).float() df = pd.read_csv(os.path.join(fname, 'ytest.csv'), index_col=False) ytest = torch.tensor(df.values).float() df = pd.read_csv(os.path.join(fname, 'ytrain_valid.csv'), index_col=False) ytrain_valid = torch.tensor(df.values).float() # Transform DataFrame to Tensor def df_to_tensor(): """ This function transforms DataFrames to tensors and makes them global. 
""" global xtrain, xvalid, xtest, ytrain, yvalid, ytest # xtrain = torch.tensor(xtrain.values).float() xvalid = torch.tensor(xvalid.values).float() xtest = torch.tensor(xtest.values).float() # ytrain = torch.tensor(ytrain.values).float() yvalid = torch.tensor(yvalid.values).float() ytest = torch.tensor(ytest.values).float() ###Output _____no_output_____ ###Markdown Read and Clean Data ###Code # DATASET INFO: # Totally 105: # - 8 project physical and financial variables, # - 19 economic variables # - indices in 5 time lag numbers (5*19 = 95) # - two output variables that are construction costs and sale prices # Read Datasets df = pd.read_excel('dataset_05ResidentialBuilding/Residential_Building_Data_Set.xlsx', sheet_name = 'Data', engine = 'openpyxl', skiprows = 1) # Clean Datasets df.drop(columns=['START YEAR', 'START QUARTER', 'COMPLETION YEAR', 'COMPLETION QUARTER'], inplace=True) # .change column names cols = df.columns.tolist() for i in range(len(cols)): cols[i] = cols[i].lower() cols[i] = cols[i].replace('-', '_') cols[i] = cols[i].replace('.', '_') cols[i] = cols[i].strip() cols[i] = cols[i][:1] + cols[i][2:] df.columns = cols # .most of the cleaning cols_cat = [] cols_num = [] target = ['v9', 'v10'] for col in cols: if df.dtypes[col] == "object": df[col] = df[col].str.lower() df[col] = df[col].str.replace('-', '_') df[col] = df[col].str.replace('&', '_') df[col] = df[col].str.strip() df[col] = df[col].replace('?', np.nan) df[col] = df[col].astype("category") if col not in target: cols_cat.append(col) else: df[col] = df[col].astype(float) if col not in target: cols_num.append(col) print('- Categorical:') print(cols_cat, '\n') print('- Continuous:') print(cols_num, '\n') print('- Target:', target, '\n') print("- Dataset size: {} points\n".format(len(df))) print("- Missing values: {} points\n".format(df.isna().sum().sum())) ###Output - Categorical: [] - Continuous: ['v1', 'v2', 'v3', 'v4', 'v5', 'v6', 'v7', 'v8', 'v11', 'v12', 'v13', 'v14', 'v15', 'v16', 'v17', 'v18', 'v19', 'v20', 'v21', 'v22', 'v23', 'v24', 'v25', 'v26', 'v27', 'v28', 'v29', 'v11_1', 'v12_1', 'v13_1', 'v14_1', 'v15_1', 'v16_1', 'v17_1', 'v18_1', 'v19_1', 'v20_1', 'v21_1', 'v22_1', 'v23_1', 'v24_1', 'v25_1', 'v26_1', 'v27_1', 'v28_1', 'v29_1', 'v11_2', 'v12_2', 'v13_2', 'v14_2', 'v15_2', 'v16_2', 'v17_2', 'v18_2', 'v19_2', 'v20_2', 'v21_2', 'v22_2', 'v23_2', 'v24_2', 'v25_2', 'v26_2', 'v27_2', 'v28_2', 'v29_2', 'v11_3', 'v12_3', 'v13_3', 'v14_3', 'v15_3', 'v16_3', 'v17_3', 'v18_3', 'v19_3', 'v20_3', 'v21_3', 'v22_3', 'v23_3', 'v24_3', 'v25_3', 'v26_3', 'v27_3', 'v28_3', 'v29_3', 'v11_4', 'v12_4', 'v13_4', 'v14_4', 'v15_4', 'v16_4', 'v17_4', 'v18_4', 'v19_4', 'v20_4', 'v21_4', 'v22_4', 'v23_4', 'v24_4', 'v25_4', 'v26_4', 'v27_4', 'v28_4', 'v29_4'] - Target: ['v9', 'v10'] - Dataset size: 372 points - Missing values: 0 points ###Markdown Split and Save ###Code # # Random seeds for the splits # seed1 = torch.randint(0, 1000, (500,)) # seed2 = torch.randint(1001, 2000, (500,)) seed1 = pd.read_csv('seed1.csv', header=None).iloc[:,0].tolist() seed2 = pd.read_csv('seed2.csv', header=None).iloc[:,0].tolist() # Split data (function) def split_data(dfX, dfy, seed1=0, seed2=42): global xtrain, xvalid, xtest, xtrain_valid, ytrain, yvalid, ytest, ytrain_valid xtrain_valid, xtest, ytrain_valid, ytest = train_test_split(dfX, dfy, test_size=0.2, random_state=seed1) xtrain, xvalid, ytrain, yvalid = train_test_split(xtrain_valid, ytrain_valid, test_size=0.25, random_state=seed2) # Set (xtrain, ytrain), (xvalid, yvalid), and (xtest, 
ytest) data_folder = 'dataset_05ResidentialBuilding' X = df.drop(columns=target) y = df[target] split_data(X, y) # Save data on disk cols_all = xtrain.columns.tolist() xtrain.to_csv(os.path.join(data_folder, 'xtrain.csv'), index=False) xvalid.to_csv(os.path.join(data_folder, 'xvalid.csv'), index=False) xtest.to_csv(os.path.join(data_folder, 'xtest.csv'), index=False) xtrain_valid.to_csv(os.path.join(data_folder, 'xtrain_valid.csv'), index=False) X.to_csv(os.path.join(data_folder, 'X.csv'), index=False) ytrain.to_csv(os.path.join(data_folder, 'ytrain.csv'), index=False) yvalid.to_csv(os.path.join(data_folder, 'yvalid.csv'), index=False) ytest.to_csv(os.path.join(data_folder, 'ytest.csv'), index=False) ytrain_valid.to_csv(os.path.join(data_folder, 'ytrain_valid.csv'), index=False) y.to_csv(os.path.join(data_folder, 'y.csv'), index=False) # print("- xtrain size: {}".format(xtrain.shape)) print("- xvalid size: {}".format(xvalid.shape)) print("- xtest size: {}".format(xtest.shape)) print("- xtrain_valid size: {}".format(xtrain_valid.shape)) ###Output - xtrain size: (222, 103) - xvalid size: (75, 103) - xtest size: (75, 103) - xtrain_valid size: (297, 103) ###Markdown Create Datasets and DataLoaders ###Code # Create Dataset class class MyDataset(Dataset): ''' Input: xtensor (torch tensor): data points; dimension [# of points, # of features] ytensor (torch tensor): y values; dimension [# of points] Output: the usual... ''' def __init__(self, xtensor, ytensor): self.x = xtensor self.y = ytensor def __len__(self): return len(self.y) def __getitem__(self, idx): return self.x[idx,:], self.y[idx], idx # Instantiate DataLoaders def make_DataLoaders(target_idx=0, batch_size=100000, num_workers = 0): """ This function instantiate and makes global DataLoaders for training, validation, and testing datasets. 
""" # .create datasets train_dataset = MyDataset(xtrain, ytrain[:,target_idx]) valid_dataset = MyDataset(xvalid, yvalid[:,target_idx]) test_dataset = MyDataset(xtest, ytest[:,target_idx]) # .make dataloaders global variables global train_dataloader, valid_dataloader, test_dataloader train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size = batch_size, shuffle = True, num_workers=num_workers) valid_dataloader = torch.utils.data.DataLoader(valid_dataset, batch_size = batch_size, shuffle = True, num_workers=num_workers) test_dataloader = torch.utils.data.DataLoader(test_dataset, batch_size = batch_size, shuffle = False, num_workers=num_workers) print('- train minibatches =', len(train_dataloader)) print('- valid minibatches =', len(valid_dataloader)) print('- test minibatches =', len(test_dataloader)) # Select type of processor to be used device = torch.device("cuda" if torch.cuda.is_available() else "cpu") if device == torch.device('cuda'): print("-Type of precessor to be used: 'gpu'") !nvidia-smi else: print("-Type of precessor to be used: 'cpu'") # Choose device # torch.cuda.set_device(6) ###Output -Type of precessor to be used: 'cpu' ###Markdown Optimizer and Loss Function ###Code # Set up SGD optimizer with a constant learning rate def setup_SGD(model, lr): """ Args: model (nn.Module class): model previously instantiated lr (float): learning rate Returns: instantiated optimizer """ return optim.SGD(model.parameters(), lr=lr, momentum=0, weight_decay=0.0) # Set up Adam optimizer with a constant learning rate def setup_ADAM(model, lr): """ Args: model (nn.Module class): model previously instantiated lr (float): learning rate Returns: instantiated optimizer """ return optim.Adam(model.parameters(), lr=lr, weight_decay=0.0) # Type of Loss Function loss_type = nn.MSELoss(reduction='mean') ###Output _____no_output_____ ###Markdown Train model ###Code # PyTorch sign function def sgn(x): return torch.sign(x) # Train model (regression) def train_model(lr, features, layer_sizes, lock_flag=False, epochs=3000, early_stop=300, fname='model'): """ """ # Set seed of random generators (for reproducibility purposes) # torch.manual_seed(42) # Instantiate Model and Optimizer model = MyNet(features, layer_sizes) if lock_flag == True: model.load_state_dict(torch.load(model_forward_name)) model = model.to(device) optimizer = setup_SGD(model, lr) # LOCKDOWN SECTION 1: if lock_flag == True: n_iterations = epochs*len(train_dataloader) layers = get_lockdown_layers(model) t0 = [] t0_initial = [] sign_w0 = [] l = 0 for layer in layers: ww = layer.weight.detach() t0.append(abs(ww).sum()) sign_w0.append(torch.sign(ww)) t0_initial.append(t0[l]) l += 1 # DataFrames to store train/valid loss function train_loss = pd.DataFrame(columns=['iteration', 'loss']) valid_loss = pd.DataFrame(columns=['iteration', 'loss']) # Initialize some local variables... 
iteration = 0 best_iter = 0 best_loss = np.inf stop_flag = 0 # Loop Over Number of Epochs iterator = tqdm.notebook.tqdm(range(1, epochs + 1), desc='Epochs loop') for n_epoch in iterator: # # .Loop Over Mini-batches for ibatch, (xx, yy, _) in enumerate(train_dataloader, start=1): # # ..Compute Validation Loss model_dict = copy.deepcopy(model.state_dict()) epoch_loss = valid_epoch(valid_dataloader, model, loss_type, device) valid_loss = valid_loss.append({'iteration': iteration + 1, 'loss': epoch_loss}, ignore_index=True) # ..Train Mini-batch: model.train() xx = xx.to(device) yy = yy.to(device) yy_pred = model(xx) # ..Compute Train Loss loss = loss_type(yy_pred.view(-1), yy) train_loss = train_loss.append({'iteration': iteration + 1, 'loss': loss.item()}, ignore_index=True) # ..Backward Propagation optimizer.zero_grad() loss.backward() # ..LOCKDOWN SECTION 2: if lock_flag == True: l = 0 for layer in layers: # ...weights and gradients (2D) w2d = layer.weight.detach() g2d = layer.weight.grad.detach() w_shape = g2d.size() # ...flattens weights and compute P(W) w1d = torch.flatten(w2d) Pw = abs(w1d).sum() # ...find g=-grads, 'gamma', and sort gamma (in descending order) p1d = 1.0 g1d = -torch.flatten(g2d) gamma = abs(g1d)/(p1d + 1e-10) _, indx1d = torch.sort(gamma, descending=True) # ...Modify Gradients Accordingly: grmin = torch.zeros(w_shape).fill_(1e-1).to(device) layer.weight.grad = torch.sign(g2d)*torch.max(abs(g2d), grmin) gr1d = torch.flatten(layer.weight.grad.detach()) pjsj = lr*abs(gr1d[indx1d]) # ...sign(g) != sign(w) elements mask_ds = (sgn(g1d[indx1d]) != sgn(w1d[indx1d])) & (w1d[indx1d] != 0.0) DS_sum = pjsj[mask_ds].sum() pjsj[mask_ds] = 0.0 mask_dsc = ~mask_ds indx1d_dsc = indx1d[mask_dsc] # ...sum(pjsj), j={1,J-1} left_side = torch.cumsum(pjsj, dim=0) - pjsj pjsj_dsc = torch.zeros(len(g1d)).to(device) pjsj_dsc[:] = pjsj[:] # ...sum(pjsj) + Penalty, j={J+1,nJ} mask_w0 = (w1d[indx1d] == 0.0) pjsj_dsc[mask_w0] = 0.0 right_side = pjsj_dsc.sum() - torch.cumsum(pjsj_dsc, dim=0) + DS_sum - Pw + t0[l] # ...delta_j and new steps ds = right_side - left_side mask_w0 = (~mask_w0).float() pjsj_new = -mask_w0**(1-sgn(ds))*sgn(g1d[indx1d])*sgn(ds)*torch.min(pjsj, abs(ds)) # ...modify gradients indx2d = np.unravel_index(indx1d_dsc.cpu(), shape=w_shape) layer.weight.grad[indx2d] = pjsj_new[mask_dsc]/lr l = l + 1 # ..Update Weights optimizer.step() # ..LOCKDOWN SECTION 3: if lock_flag == True: # ...Set Weights That Change Signs to Zero l = 0 for layer in layers: ww = layer.weight.detach() sign_w = sgn(ww) mask0 = ((sign_w != sign_w0[l]) & (sign_w0[l] != 0.0)) with torch.no_grad(): layer.weight[mask0] = 0.0 sign_w0[l] = sign_w # ...Tighten Constraint if iteration%step == 0: t0[l] = (1.0 - iteration/n_iterations)*t0_initial[l] l = l + 1 # ..Early Stop batch_loss = valid_loss.iloc[iteration, 1] if batch_loss < best_loss: torch.save(model_dict, fname+"_best.pth") best_loss = batch_loss best_iter = iteration else: if iteration - best_iter >= early_stop: stop_flag = 1 break # iteration += 1 # if stop_flag == 1: break # if lock_flag == False: torch.save(model_dict, fname+"_last.pth") print("Summary:") print("-learning rate = {:.5f}".format(lr)) if stop_flag == 1: print('-path 1 has early stopped:') print(' {:d} iterations with no improvement in valid loss.'.format(early_stop)) print("-Model saved after iteration {:d}.".format(best_iter + 1)) print("-Train. Loss={:.7f}".format(train_loss.iloc[best_iter, 1])) print("-Valid. 
Loss={:.7f}".format(valid_loss.iloc[best_iter, 1])) # return train_loss, valid_loss # Evaluate one epoch (regression) def valid_epoch(data_loader, model, loss_type, device): """ Args: data_loader: torch DataLoader. model: torch model previously trained. device: 'gpu' or 'cpu'. Returns: Values of Loss Function after one epoch. """ # Initialize some local variables loss_fun = 0. n_points = 0 # Put model in evaluation mode model.eval() # Loop over mini batches (no gradients need to be computed) with torch.no_grad(): for i, (xx, yy, _) in enumerate(data_loader, start=1): xx = xx.to(device) yy = yy.to(device) yy_pred = model(xx) # .compute loss function batch_size = len(yy) loss = loss_type(yy_pred.view(-1), yy) loss_fun += batch_size*loss.item() # .number of points used after the ith mini-batch n_points += batch_size # return loss_fun/n_points ###Output _____no_output_____ ###Markdown Results: Lasso ###Code # Load data load_data('dataset_05ResidentialBuilding') # transform tensors to NumPy arrays xtrain = xtrain.numpy() xvalid = xvalid.numpy() xtest = xtest.numpy() ytrain = ytrain.numpy() yvalid = yvalid.numpy() ytest = ytest.numpy() # Normalize data scaler = StandardScaler() scaler.fit(xtrain) xtrain = scaler.transform(xtrain) xvalid = scaler.transform(xvalid) xtest = scaler.transform(xtest) scaler = StandardScaler() scaler.fit(ytrain) ytrain = scaler.transform(ytrain) yvalid = scaler.transform(yvalid) ytest = scaler.transform(ytest) ###Output _____no_output_____ ###Markdown Target 2 ###Code # Grid search space target_idx = 1 grid_alpha = np.geomspace(5e-5, 1.0, num=500) df_grid_loss = pd.DataFrame(columns = ['alpha', 'valid_mse', 'valid_r2'], index=range(len(grid_alpha))) params = {'max_iter': 100000, 'random_state': 42, 'warm_start': True} # Perform grid search target_idx = 1 data_out = 'data_lasso_05ResidentialBuilding' iterator = tqdm.notebook.tqdm(range(1, len(grid_alpha) + 1), desc='alpha-grid loop') for n in iterator: irow = n-1 params['alpha'] = grid_alpha[irow] log_reg = Lasso(**params).fit(xtrain, ytrain[:,target_idx]) ypred = log_reg.predict(xvalid) val_mse = mean_squared_error(yvalid[:,target_idx], ypred, squared=True) val_r2 = r2_score(yvalid[:,target_idx], ypred) df_grid_loss.iloc[irow,:] = [params['alpha'], val_mse, val_r2] print("alpha={:.5f} | valid_mse={:.5f}, valid_r2={:.5f}".format(params['alpha'], val_mse, val_r2)) # .save grid search results df_grid_loss.to_csv(os.path.join(data_out, 'df_grid_loss02.csv'), index=None) # Display grid search results data_out = 'data_lasso_05ResidentialBuilding' df_grid_loss = pd.read_csv(os.path.join(data_out, 'df_grid_loss02.csv')) idx = df_grid_loss.valid_mse.idxmin() best_alpha = df_grid_loss.iloc[idx, 0] best_mse = df_grid_loss.iloc[idx, 1] print("Best parameters:") print("- Best alpha = {:.5f}".format(best_alpha)) print("- Best valid loss = {}".format(best_mse)) # Plot Loss vs point in grid search data_out = 'data_lasso_05ResidentialBuilding' df_grid_loss = pd.read_csv(os.path.join(data_out, 'df_grid_loss02.csv')) fig, axes = plt.subplots(figsize=(10,6)) # axes.scatter(np.log10(df_grid_loss.alpha), df_grid_loss['valid_mse'], s=10) axes.plot(np.log10(df_grid_loss.alpha), df_grid_loss['valid_mse'], label='mse', linewidth=3) # axes.scatter(df_grid_loss.index, df_grid_loss['valid_r2']) # axes.plot(df_grid_loss.index, df_grid_loss['valid_r2'], label='r2') axes.set_xlabel("log10(alpha)") axes.set_ylabel("Validation metric") axes.legend() plt.show() # Train with best hyperparameters target_idx = 1 params['alpha'] = best_alpha log_reg = 
Lasso(**params).fit(xtrain, ytrain[:,target_idx]) # Save model and model's best params data_out = 'data_lasso_05ResidentialBuilding' joblib.dump(log_reg, os.path.join(data_out, 'lasso_model02.pkl')) print("model saved at: '{}'".format(os.path.join(data_out, 'lasso_model02.pkl'))) best_params = {} best_params['alpha'] = best_alpha with open(os.path.join(data_out, 'lasso_best_params02.json'), 'w') as f: json.dump(best_params, f) print("best params saved at: '{}'".format(os.path.join(data_out, 'lasso_best_params02.json'))) # Find MSE, R2, Accuracy, etc... target_idx = 1 data_out = 'data_lasso_05ResidentialBuilding' model = joblib.load(os.path.join(data_out, 'lasso_model02.pkl')) df_results = pd.DataFrame(index=['train', 'valid', 'test'], columns=['mse', 'r2']) ypred = model.predict(xtrain) df_results.loc['train', 'mse'] = mean_squared_error(ytrain[:,target_idx], ypred, squared=True) df_results.loc['train', 'r2'] = r2_score(ytrain[:,target_idx], ypred) ypred = model.predict(xvalid) df_results.loc['valid', 'mse'] = mean_squared_error(yvalid[:,target_idx], ypred, squared=True) df_results.loc['valid', 'r2'] = r2_score(yvalid[:,target_idx], ypred) ypred = model.predict(xtest) df_results.loc['test', 'mse'] = mean_squared_error(ytest[:,target_idx], ypred, squared=True) df_results.loc['test', 'r2'] = r2_score(ytest[:,target_idx], ypred) # Save data df_results.to_csv(os.path.join(data_out, 'results_target_lasso02.csv'), index=True) print("model results saved at: '{}'".format(os.path.join(data_out, 'results_target_lasso02.csv'))) df_results.head() # Find Accuracy +/- STD target_idx = 1 data_out = 'data_lasso_05ResidentialBuilding/partitions' df_results = pd.DataFrame(columns=['train_mse', 'valid_mse', 'test_mse', \ 'train_r2', 'valid_r2', 'test_r2']) model = joblib.load('data_lasso_05ResidentialBuilding/lasso_model02.pkl') partitions = 100 iterator = tqdm.notebook.tqdm(range(1, partitions + 1), desc='Partitions loop') for n in iterator: X = pd.read_csv('dataset_05ResidentialBuilding/X.csv') y = pd.read_csv('dataset_05ResidentialBuilding/y.csv') split_data(X, y, seed1=seed1[n], seed2=seed2[n]) # .normalize data sets scaler = StandardScaler() scaler.fit(xtrain) xtrain = scaler.transform(xtrain) xvalid = scaler.transform(xvalid) xtest = scaler.transform(xtest) scaler = StandardScaler() scaler.fit(ytrain) ytrain = scaler.transform(ytrain) yvalid = scaler.transform(yvalid) ytest = scaler.transform(ytest) # .train model model.fit(xtrain, ytrain[:,target_idx]) # .compute/save accuracy ypred = model.predict(xtrain) df_results.loc[n, 'train_mse'] = mean_squared_error(ytrain[:,target_idx], ypred, squared=True) df_results.loc[n, 'train_r2'] = r2_score(ytrain[:,target_idx], ypred) ypred = model.predict(xvalid) df_results.loc[n, 'valid_mse'] = mean_squared_error(yvalid[:,target_idx], ypred, squared=True) df_results.loc[n, 'valid_r2'] = r2_score(yvalid[:,target_idx], ypred) ypred = model.predict(xtest) df_results.loc[n, 'test_mse'] = mean_squared_error(ytest[:,target_idx], ypred, squared=True) df_results.loc[n, 'test_r2'] = r2_score(ytest[:,target_idx], ypred) joblib.dump(model, os.path.join(data_out, 'lasso_model02_'+str(n)+'.pkl')) df_results.to_csv('data_lasso_05ResidentialBuilding/accuracy_lasso02.csv', index=True) # Display results data_out = 'data_lasso_05ResidentialBuilding' accuracy = pd.read_csv(os.path.join(data_out, 'accuracy_lasso02.csv')) print("Train mse = {:.5f} +/- {:.5f}".format(accuracy['train_mse'].mean(), accuracy['train_mse'].std())) print("Valid mse = {:.5f} +/- 
{:.5f}".format(accuracy['valid_mse'].mean(), accuracy['valid_mse'].std())) print("Test mse = {:.5f} +/- {:.5f}".format(accuracy['test_mse'].mean(), accuracy['test_mse'].std())) print("") print("Train r2 = {:.5f} +/- {:.5f}".format(accuracy['train_r2'].mean(), accuracy['train_r2'].std())) print("Valid r2 = {:.5f} +/- {:.5f}".format(accuracy['valid_r2'].mean(), accuracy['valid_r2'].std())) print("Test r2 = {:.5f} +/- {:.5f}".format(accuracy['test_r2'].mean(), accuracy['test_r2'].std())) ###Output Train mse = 0.01752 +/- 0.00456 Valid mse = 0.04566 +/- 0.02597 Test mse = 0.04218 +/- 0.02413 Train r2 = 0.98248 +/- 0.00456 Valid r2 = 0.95696 +/- 0.01824 Test r2 = 0.95922 +/- 0.01951 ###Markdown Gradient Boosting ###Code # Load data load_data('dataset_05ResidentialBuilding') # transform tensors to NumPy arrays xtrain = xtrain.numpy() xvalid = xvalid.numpy() xtest = xtest.numpy() ytrain = ytrain.numpy() yvalid = yvalid.numpy() ytest = ytest.numpy() # Normalize data scaler = StandardScaler() scaler.fit(xtrain) xtrain = scaler.transform(xtrain) xvalid = scaler.transform(xvalid) xtest = scaler.transform(xtest) scaler = StandardScaler() scaler.fit(ytrain) ytrain = scaler.transform(ytrain) yvalid = scaler.transform(yvalid) ytest = scaler.transform(ytest) ###Output _____no_output_____ ###Markdown Target 2: xgb native api ###Code # Build DMatrices target_idx = 1 dtrain = xgb.DMatrix(data=xtrain, label=ytrain[:,target_idx], nthread=10, feature_names=cols_all) dvalid = xgb.DMatrix(data=xvalid, label=yvalid[:,target_idx], nthread=10, feature_names=cols_all) dtest = xgb.DMatrix(data=xtest, label=ytest[:,target_idx], nthread=10, feature_names=cols_all) valid_list = [(dtrain, 'train'), (dvalid, 'valid')] # Grid search space grid_eta = np.geomspace(0.01, 1.0, num=200) grid_depth = [1, 2, 3, 4, 5] grid_param = [(eta, max_depth) for eta in grid_eta for max_depth in grid_depth] param = [('eta', 0.01), ('max_depth', 2), ('objective', 'reg:squarederror'), ('nthread', 10), ('eval_metric', 'rmse') ] # Perform grid search data_out = 'data_xgb_05ResidentialBuilding' df_grid_loss = pd.DataFrame(columns = ['eta', 'max_depth', 'valid_loss'], index = range(len(grid_param))) i = 0 for eta, max_depth in grid_param: param[0] = ('eta', eta) param[1] = ('max_depth', max_depth) xgb_clf = xgb.train(param, dtrain, num_boost_round = 5000, evals = valid_list, early_stopping_rounds = 20, verbose_eval = False) print("eta={:.5f}, max_depth={} | valid_loss={:.5f} (iters={})".format(eta, max_depth, xgb_clf.best_score, xgb_clf.best_iteration)) df_grid_loss.iloc[i,:] = [eta, max_depth, xgb_clf.best_score] i += 1 # .save grid search results df_grid_loss.to_csv(os.path.join(data_out, 'df_grid_loss02.csv'), index=None) # Display grid search results data_out = 'data_xgb_05ResidentialBuilding' df_grid_loss = pd.read_csv(os.path.join(data_out, 'df_grid_loss02.csv')) idx = df_grid_loss.valid_loss.idxmin() best_eta = df_grid_loss.iloc[idx, 0] best_depth = df_grid_loss.iloc[idx, 1] best_loss = df_grid_loss.iloc[idx, 2] print("Best parameters:") print("- Best eta = {:.3f}".format(best_eta)) print("- Best max_depth = {}".format(best_depth)) print("- Best valid loss = {}".format(best_loss)) # Plot Loss vs point in grid search data_out = 'data_xgb_05ResidentialBuilding' df_grid_loss = pd.read_csv(os.path.join(data_out, 'df_grid_loss02.csv')) fig, axes = plt.subplots(figsize=(8,6)) axes.scatter(df_grid_loss.index, df_grid_loss['valid_loss']) axes.plot(df_grid_loss.index, df_grid_loss['valid_loss']) axes.set_xlabel("grid search point") 
axes.set_ylabel("Valid loss") plt.show() # Train with best hyperparameters param[0] = ('eta', best_eta) param[1] = ('max_depth', best_depth) evals_result = {} xgb_clf = xgb.train(param, dtrain, num_boost_round = 5000, evals = valid_list, early_stopping_rounds = 50, verbose_eval = True, evals_result=evals_result) # Plot of train/valid loss vs iter fig, axes = plt.subplots(figsize=(6,4)) axes.plot(evals_result['train']['rmse'], label="train") axes.plot(evals_result['valid']['rmse'], label="valid") axes.legend() axes.set_ylabel("loss") axes.set_xlabel("iteration") # axes.set_xticks(np.arange(0, len(evals_result['train']['logloss']), 1)) plt.show() # Save model and model's best params data_out = 'data_xgb_05ResidentialBuilding' xgb_clf.save_model(os.path.join(data_out, 'xgb_model02.json')) print("model saved at: '{}'".format(os.path.join(data_out, 'xgb_model02.json'))) best_params = {} best_params['best_eta'] = best_eta best_params['best_depth'] = int(best_depth) with open(os.path.join(data_out, 'xgb_best_params02.json'), 'w') as f: json.dump(best_params, f) print("best params saved at: '{}'".format(os.path.join(data_out, 'xgb_best_params02.json'))) # Save feature map def ceate_feature_map(features, data_out): f = open(os.path.join(data_out, 'xgb_model_fmap02.txt'), 'w') i = 0 for feat in features: f.write('{0}\t{1}\tq\n'.format(i, feat)) i = i + 1 f.close() data_out = 'data_xgb_05ResidentialBuilding' ceate_feature_map(cols_all, data_out) print("feature map saved at: '{}'".format(os.path.join(data_out, 'xgb_model_fmap02.txt'))) # Find MSE, R2, Accuracy, etc... target_idx = 1 data_out = 'data_xgb_05ResidentialBuilding' model = xgb.Booster(model_file=os.path.join(data_out, 'xgb_model02.json')) df_results = pd.DataFrame(index=['train', 'valid', 'test'], columns=['mse', 'r2']) ypred = model.predict(dtrain) df_results.loc['train', 'mse'] = mean_squared_error(ytrain[:,target_idx], ypred, squared=True) df_results.loc['train', 'r2'] = r2_score(ytrain[:,target_idx], ypred) ypred = model.predict(dvalid) df_results.loc['valid', 'mse'] = mean_squared_error(yvalid[:,target_idx], ypred, squared=True) df_results.loc['valid', 'r2'] = r2_score(yvalid[:,target_idx], ypred) ypred = model.predict(dtest) df_results.loc['test', 'mse'] = mean_squared_error(ytest[:,target_idx], ypred, squared=True) df_results.loc['test', 'r2'] = r2_score(ytest[:,target_idx], ypred) # Save data df_results.to_csv(os.path.join(data_out, 'results_target_xgb02.csv'), index=True) print("model results saved at: '{}'".format(os.path.join(data_out, 'results_target_xgb02.csv'))) df_results.head() # Find Accuracy +/- STD target_idx = 1 data_out = 'data_xgb_05ResidentialBuilding/patitions' df_results = pd.DataFrame(columns=['train_mse', 'valid_mse', 'test_mse', \ 'train_r2', 'valid_r2', 'test_r2']) with open('data_xgb_05ResidentialBuilding/xgb_best_params02.json') as f: best_params = json.load(f) param = [('eta', best_params['best_eta']), ('max_depth', best_params['best_depth']), ('objective', 'reg:squarederror'), ('nthread', 16), ('eval_metric', 'rmse') ] partitions = 100 iterator = tqdm.notebook.tqdm(range(1, partitions + 1), desc='Partitions loop') for n in iterator: X = pd.read_csv('dataset_05ResidentialBuilding/X.csv') y = pd.read_csv('dataset_05ResidentialBuilding/y.csv') split_data(X, y, seed1=seed1[n], seed2=seed2[n]) # .normalize data sets scaler = StandardScaler() scaler.fit(xtrain) xtrain = scaler.transform(xtrain) xvalid = scaler.transform(xvalid) xtest = scaler.transform(xtest) scaler = StandardScaler() scaler.fit(ytrain) ytrain = 
scaler.transform(ytrain) yvalid = scaler.transform(yvalid) ytest = scaler.transform(ytest) # .build DMatrices dtrain = xgb.DMatrix(data=xtrain, label=ytrain[:,target_idx], nthread=10) dvalid = xgb.DMatrix(data=xvalid, label=yvalid[:,target_idx], nthread=10) dtest = xgb.DMatrix(data=xtest, label=ytest[:,target_idx], nthread=10) valid_list = [(dtrain, 'train'), (dvalid, 'valid')] # .train model evals_result = {} model = xgb.train(param, dtrain, num_boost_round = 5000, evals = valid_list, early_stopping_rounds = 20, verbose_eval = False, evals_result=evals_result) # .compute/save accuracy ypred = model.predict(dtrain) df_results.loc[n, 'train_mse'] = mean_squared_error(ytrain[:,target_idx], ypred, squared=True) df_results.loc[n, 'train_r2'] = r2_score(ytrain[:,target_idx], ypred) ypred = model.predict(dvalid) df_results.loc[n, 'valid_mse'] = mean_squared_error(yvalid[:,target_idx], ypred, squared=True) df_results.loc[n, 'valid_r2'] = r2_score(yvalid[:,target_idx], ypred) ypred = model.predict(dtest) df_results.loc[n, 'test_mse'] = mean_squared_error(ytest[:,target_idx], ypred, squared=True) df_results.loc[n, 'test_r2'] = r2_score(ytest[:,target_idx], ypred) model.save_model(os.path.join(data_out, 'xgb_model02_'+str(n)+'.json')) df_results.to_csv('data_xgb_05ResidentialBuilding/accuracy_xgb02.csv', index=True) # Display results data_out = 'data_xgb_05ResidentialBuilding' accuracy_xgb = pd.read_csv(os.path.join(data_out, 'accuracy_xgb02.csv')) print("Train mse = {:.5f} +/- {:.5f}".format(accuracy_xgb['train_mse'].mean(), accuracy_xgb['train_mse'].std())) print("Valid mse = {:.5f} +/- {:.5f}".format(accuracy_xgb['valid_mse'].mean(), accuracy_xgb['valid_mse'].std())) print("Test mse = {:.5f} +/- {:.5f}".format(accuracy_xgb['test_mse'].mean(), accuracy_xgb['test_mse'].std())) print("") print("Train r2 = {:.5f} +/- {:.5f}".format(accuracy_xgb['train_r2'].mean(), accuracy_xgb['train_r2'].std())) print("Valid r2 = {:.5f} +/- {:.5f}".format(accuracy_xgb['valid_r2'].mean(), accuracy_xgb['valid_r2'].std())) print("Test r2 = {:.5f} +/- {:.5f}".format(accuracy_xgb['test_r2'].mean(), accuracy_xgb['test_r2'].std())) ###Output Train mse = 0.00358 +/- 0.00213 Valid mse = 0.05383 +/- 0.03860 Test mse = 0.05042 +/- 0.03253 Train r2 = 0.99642 +/- 0.00213 Valid r2 = 0.95134 +/- 0.02509 Test r2 = 0.95306 +/- 0.02131 ###Markdown Lockout ###Code # Normalize Data Set def normalize_data(): """ """ global xtrain, xvalid, xtest, ytrain, yvalid, ytest scaler = StandardScaler() scaler.fit(xtrain.numpy()) xtrain = torch.from_numpy(scaler.transform(xtrain.numpy())) xvalid = torch.from_numpy(scaler.transform(xvalid.numpy())) xtest = torch.from_numpy(scaler.transform(xtest.numpy())) scaler = StandardScaler() scaler.fit(ytrain.numpy()) ytrain = torch.from_numpy(scaler.transform(ytrain.numpy())) yvalid = torch.from_numpy(scaler.transform(yvalid.numpy())) ytest = torch.from_numpy(scaler.transform(ytest.numpy())) # Save output data def save_output(data_out, f1, f2, f3, new_folder=False): """ """ # Save relevant data if new_folder == True: dirs = os.listdir() if data_out in dirs: print("'{}' directory deleted.".format(data_out)) shutil.rmtree(data_out) print("'{}' directory created.\n".format(data_out)) os.mkdir(data_out) else: print("'{}' directory created.\n".format(data_out)) os.mkdir(data_out) # train_loss.to_csv(os.path.join(data_out, f1), index=False) valid_loss.to_csv(os.path.join(data_out, f2), index=False) print("'{}' saved.".format(f1)) print("'{}' saved.".format(f2)) for m in f3: shutil.move(m, os.path.join(data_out, 
m)) print("'{}' saved.".format(m)) ###Output _____no_output_____ ###Markdown Target 2: l1=5, l2=1; lockout=l1 ###Code # Set layers where lockdown is to be applied def get_lockdown_layers(model): layers = [model.classifier[0]] return layers # NN architecture with its corresponding forward method class MyNet(nn.Module): # .Network architecture def __init__(self, features, layer_sizes): super(MyNet, self).__init__() self.classifier = nn.Sequential( nn.Linear(features, layer_sizes[0], bias=True), nn.ReLU(inplace=True), nn.Linear(layer_sizes[0], layer_sizes[1], bias=True)#, # nn.ReLU(inplace=True), # nn.Linear(layer_sizes[1], layer_sizes[2], bias=True)#, # nn.ReLU(inplace=True), # nn.Linear(layer_sizes[2], layer_sizes[3], bias=True) ) # .Forward function def forward(self, x): x = self.classifier(x) return x # Grid search space target_idx = 0 grid_lrs = np.geomspace(5e-4, 1e-1, num=10) df_grid_loss = pd.DataFrame(columns = ['lr', 'valid_mse', 'valid_r2'], index=range(len(grid_lrs))) # Perform grid search (unconstrained) target_idx = 1 layer_sizes = [5, 1] epochs = 50000 data_in = 'dataset_05ResidentialBuilding' data_out = "data_unconstrained_05ResidentialBuilding/lrs" lock_flag = False # Read data load_data(data_in) # Normalize data normalize_data() # Create DataLoaders make_DataLoaders(target_idx=target_idx) features = xtrain.size(1) # Train model iterator = tqdm.notebook.tqdm(range(1, len(grid_lrs) + 1), desc='lr-grid loop') for n in iterator: irow = n-1 fname = 'model05_forward02_'+str(n) train_loss, valid_loss = train_model( grid_lrs[irow], features, layer_sizes, lock_flag = lock_flag, epochs=epochs, early_stop=epochs, fname=fname) print('\nBest train loss = {:.7f}\n'.format(train_loss['loss'].min())) # .save relevant data f3 = [fname+'_last.pth', fname+'_best.pth'] save_output(data_out, 'train_forward02_'+str(n)+'.csv', 'valid_forward02_'+str(n)+'.csv', f3) # .find MSE, R2, etc... 
mm = MyNet(features, layer_sizes) mm.load_state_dict(torch.load(os.path.join(data_out, fname+'_best.pth'))) mm = mm.to(device) mm.eval() ypred = mm(xvalid) mse = mean_squared_error(yvalid.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel(), squared=True) r2 = r2_score(yvalid.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel()) # .save grid search results df_grid_loss.iloc[irow,:] = [grid_lrs[irow], mse, r2] df_grid_loss.to_csv(os.path.join(data_out, 'df_grid_forward02.csv'), index=None) # Plot Loss vs point in grid search data_out = "data_unconstrained_05ResidentialBuilding/lrs" df_grid_loss = pd.read_csv(os.path.join(data_out, 'df_grid_forward02.csv')) fig, axes = plt.subplots(figsize=(10,6)) axes.scatter(np.log10(df_grid_loss.lr), df_grid_loss['valid_mse']) axes.plot(np.log10(df_grid_loss.lr), df_grid_loss['valid_mse'], label='mse') # axes.scatter(np.log10(df_grid_loss.lr), df_grid_loss['valid_r2']) # axes.plot(np.log10(df_grid_loss.lr), df_grid_loss['valid_r2'], label='r2') axes.set_xlabel("log10(lr)") axes.set_ylabel("Validation metric") axes.legend() plt.show() # Display grid search results data_out = "data_unconstrained_05ResidentialBuilding/lrs" df_grid_loss = pd.read_csv(os.path.join(data_out, 'df_grid_forward02.csv')) idx = df_grid_loss.valid_mse.idxmin() best_lr_fward = df_grid_loss.iloc[idx, 0] best_mse = df_grid_loss.iloc[idx, 1] print("Best parameters ({}):".format(idx+1)) print("- Best lr = {:.5f}".format(best_lr_fward)) print("- Best valid mse = {}".format(best_mse)) # Copy best model to main directory shutil.copy("data_unconstrained_05ResidentialBuilding/lrs/model05_forward02_"+str(idx+1)+"_best.pth", "data_unconstrained_05ResidentialBuilding/model05_forward02_best.pth") shutil.copy("data_unconstrained_05ResidentialBuilding/lrs/model05_forward02_"+str(idx+1)+"_last.pth", "data_unconstrained_05ResidentialBuilding/model05_forward02_last.pth") shutil.copy("data_unconstrained_05ResidentialBuilding/lrs/train_forward02_"+str(idx+1)+".csv", "data_unconstrained_05ResidentialBuilding/train_loss02.csv") shutil.copy("data_unconstrained_05ResidentialBuilding/lrs/valid_forward02_"+str(idx+1)+".csv", "data_unconstrained_05ResidentialBuilding/valid_loss02.csv") # Grid search space target_idx = 1 grid_lrs = np.geomspace(5e-4, 1e-1, num=10) df_grid_loss = pd.DataFrame(columns = ['lr', 'valid_mse', 'valid_r2'], index=range(len(grid_lrs))) # Perform grid search (Lockdown: path 2) target_idx = 1 layer_sizes = [5, 1] epochs = 20000 data_in = 'dataset_05ResidentialBuilding' data_out = "data_lockdown_05ResidentialBuilding/lrs" step = 1 lock_flag = True # Read data load_data(data_in) # Normalize data normalize_data() # Create DataLoaders make_DataLoaders(target_idx=target_idx) features = xtrain.size(1) # Train model iterator = tqdm.notebook.tqdm(range(1, len(grid_lrs) + 1), desc='lr-grid loop') for n in iterator: irow = n-1 model_forward_name = 'data_unconstrained_05ResidentialBuilding/lrs/model05_forward02_'+str(n)+'_last.pth' fname = 'model05_backward02_'+str(n) train_loss, valid_loss = train_model( grid_lrs[irow], features, layer_sizes, lock_flag = lock_flag, epochs=epochs, early_stop=epochs, fname=fname) print('\nBest train loss = {:.7f}\n'.format(train_loss['loss'].min())) # .save relevant data f3 = [fname+'_best.pth'] save_output(data_out, 'train_loss02_'+str(n)+'.csv', 'valid_loss02_'+str(n)+'.csv', f3) # .find MSE, R2, etc... 
mm = MyNet(features, layer_sizes) mm.load_state_dict(torch.load(os.path.join(data_out, fname+'_best.pth'))) mm = mm.to(device) mm.eval() ypred = mm(xvalid) mse = mean_squared_error(yvalid.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel(), squared=True) r2 = r2_score(yvalid.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel()) # .save grid search results df_grid_loss.iloc[irow,:] = [grid_lrs[irow], mse, r2] df_grid_loss.to_csv(os.path.join(data_out, 'df_grid_backward02.csv'), index=None) # Plot Loss vs point in grid search data_out = "data_lockdown_05ResidentialBuilding/lrs" df_grid_loss = pd.read_csv(os.path.join(data_out, 'df_grid_backward02.csv')) fig, axes = plt.subplots(figsize=(10,6)) axes.scatter(np.log10(df_grid_loss.lr), df_grid_loss['valid_mse']) axes.plot(np.log10(df_grid_loss.lr), df_grid_loss['valid_mse'], label='mse') # axes.scatter(np.log10(df_grid_loss.lr), df_grid_loss['valid_r2']) # axes.plot(np.log10(df_grid_loss.lr), df_grid_loss['valid_r2'], label='r2') axes.set_xlabel("log10(lr)") axes.set_ylabel("Validation metric") axes.legend() plt.show() # Display grid search results data_out = "data_lockdown_05ResidentialBuilding/lrs" df_grid_loss = pd.read_csv(os.path.join(data_out, 'df_grid_backward02.csv')) idx = df_grid_loss.valid_mse.idxmin() best_lr_bward = df_grid_loss.iloc[idx, 0] best_mse = df_grid_loss.iloc[idx, 1] print("Best parameters ({}):".format(idx+1)) print("- Best lr = {:.5f}".format(best_lr_bward)) print("- Best valid mse = {}".format(best_mse)) # Copy best models to main directory shutil.copy("data_unconstrained_05ResidentialBuilding/lrs/model05_forward02_"+str(idx+1)+"_best.pth", "data_lockdown_05ResidentialBuilding/model05_fward02_best.pth") shutil.copy("data_unconstrained_05ResidentialBuilding/lrs/model05_forward02_"+str(idx+1)+"_last.pth", "data_lockdown_05ResidentialBuilding/model05_fward02_last.pth") shutil.copy("data_lockdown_05ResidentialBuilding/lrs/model05_backward02_"+str(idx+1)+"_best.pth", "data_lockdown_05ResidentialBuilding/model05_backward02_best.pth") shutil.copy("data_unconstrained_05ResidentialBuilding/lrs/train_forward02_"+str(idx+1)+".csv", "data_lockdown_05ResidentialBuilding/train_loss_fward02.csv") shutil.copy("data_unconstrained_05ResidentialBuilding/lrs/valid_forward02_"+str(idx+1)+".csv", "data_lockdown_05ResidentialBuilding/valid_loss_fward02.csv") shutil.copy("data_lockdown_05ResidentialBuilding/lrs/train_loss02_"+str(idx+1)+".csv", "data_lockdown_05ResidentialBuilding/train_loss02.csv") shutil.copy("data_lockdown_05ResidentialBuilding/lrs/valid_loss02_"+str(idx+1)+".csv", "data_lockdown_05ResidentialBuilding/valid_loss02.csv") # Find MSE, R2, etc... 
target_idx = 1 data_out = "data_lockdown_05ResidentialBuilding" index = pd.MultiIndex.from_product([['TRAIN', 'VALIDATION', 'TEST'], ['forward', 'lockdown']]) df_results = pd.DataFrame(index=index, columns=['MSE', 'R2']) # Unconstrained results mm = MyNet(features, layer_sizes) mm.load_state_dict(torch.load('data_unconstrained_05ResidentialBuilding/model05_forward02_best.pth')) mm = mm.to(device) mm.eval() ypred = mm(xtrain) mse = mean_squared_error(ytrain[:,target_idx].detach().numpy(), ypred.detach().numpy()) r2 = r2_score(ytrain[:,target_idx].detach().numpy(), ypred.detach().numpy()) df_results.loc[('TRAIN', 'forward'), 'MSE'] = mse df_results.loc[('TRAIN', 'forward'), 'R2'] = r2 ypred = mm(xvalid) mse = mean_squared_error(yvalid[:,target_idx].detach().numpy(), ypred.detach().numpy()) r2 = r2_score(yvalid[:,target_idx].detach().numpy(), ypred.detach().numpy()) df_results.loc[('VALIDATION', 'forward'), 'MSE'] = mse df_results.loc[('VALIDATION', 'forward'), 'R2'] = r2 ypred = mm(xtest) mse = mean_squared_error(ytest[:,target_idx].detach().numpy(), ypred.detach().numpy()) r2 = r2_score(ytest[:,target_idx].detach().numpy(), ypred.detach().numpy()) df_results.loc[('TEST', 'forward'), 'MSE'] = mse df_results.loc[('TEST', 'forward'), 'R2'] = r2 # Lockdown results mm = MyNet(features, layer_sizes) mm.load_state_dict(torch.load('data_lockdown_05ResidentialBuilding/model05_backward02_best.pth')) mm = mm.to(device) mm.eval() ypred = mm(xtrain) mse = mean_squared_error(ytrain[:,target_idx].detach().numpy(), ypred.detach().numpy()) r2 = r2_score(ytrain[:,target_idx].detach().numpy(), ypred.detach().numpy()) df_results.loc[('TRAIN', 'lockdown'), 'MSE'] = mse df_results.loc[('TRAIN', 'lockdown'), 'R2'] = r2 ypred = mm(xvalid) mse = mean_squared_error(yvalid[:,target_idx].detach().numpy(), ypred.detach().numpy()) r2 = r2_score(yvalid[:,target_idx].detach().numpy(), ypred.detach().numpy()) df_results.loc[('VALIDATION', 'lockdown'), 'MSE'] = mse df_results.loc[('VALIDATION', 'lockdown'), 'R2'] = r2 ypred = mm(xtest) mse = mean_squared_error(ytest[:,target_idx].detach().numpy(), ypred.detach().numpy()) r2 = r2_score(ytest[:,target_idx].detach().numpy(), ypred.detach().numpy()) df_results.loc[('TEST', 'lockdown'), 'MSE'] = mse df_results.loc[('TEST', 'lockdown'), 'R2'] = r2 # Save data df_results.to_csv(os.path.join(data_out, 'results_target_lockdown02.csv'), index=True) print("model results saved at: '{}'".format(os.path.join(data_out, 'results_target_lockdown02.csv'))) df_results # Find Accuracy, Loss +/- STD (Fordward) target_idx = 1 data_output = 'data_unconstrained_05ResidentialBuilding' df_results = pd.DataFrame(columns=['train_mse', 'valid_mse', 'test_mse', \ 'train_r2', 'valid_r2', 'test_r2']) # df_results = pd.read_csv(os.path.join(data_output, 'accuracy_forward01.csv'), index_col=0) partitions = 100 iterator = tqdm.notebook.tqdm(range(1, partitions + 1), desc='Partitions loop') for n in iterator: X = pd.read_csv('dataset_05ResidentialBuilding/X.csv') y = pd.read_csv('dataset_05ResidentialBuilding/y.csv') split_data(X, y, seed1=seed1[n], seed2=seed2[n]) # .normalize data sets df_to_tensor() normalize_data() make_DataLoaders(target_idx=target_idx) # .train model (forward) lr = best_lr_fward layer_sizes = [5, 1] epochs = 50000 data_out = "data_unconstrained_05ResidentialBuilding/partitions" lock_flag = False features = xtrain.size(1) fname = 'model05_forward02_'+str(n) train_loss, valid_loss = train_model( lr, features, layer_sizes, lock_flag = lock_flag, epochs = epochs, early_stop = 10000, fname = 
fname) # .save relevant data f3 = [fname+'_last.pth', fname+'_best.pth'] save_output(data_out, 'train_loss02_'+str(n)+'.csv', 'valid_loss02_'+str(n)+'.csv', f3) # .find MSE, R2, etc... mm = MyNet(features, layer_sizes) mm.load_state_dict(torch.load(os.path.join(data_out, fname+'_best.pth'))) mm = mm.to(device) mm.eval() ypred = mm(xtrain) mse = mean_squared_error(ytrain.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel(), squared=True) r2 = r2_score(ytrain.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel()) df_results.loc[n, 'train_mse'] = mse df_results.loc[n, 'train_r2'] = r2 ypred = mm(xvalid) mse = mean_squared_error(yvalid.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel(), squared=True) r2 = r2_score(yvalid.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel()) df_results.loc[n, 'valid_mse'] = mse df_results.loc[n, 'valid_r2'] = r2 ypred = mm(xtest) mse = mean_squared_error(ytest.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel(), squared=True) r2 = r2_score(ytest.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel()) df_results.loc[n, 'test_mse'] = mse df_results.loc[n, 'test_r2'] = r2 df_results.to_csv(os.path.join(data_output, 'accuracy05_forward02.csv'), index=True) # Display results data_out = 'data_unconstrained_05ResidentialBuilding' accuracy = pd.read_csv(os.path.join(data_out, 'accuracy05_forward02.csv')) print("Train mse = {:.5f} +/- {:.5f}".format(accuracy['train_mse'].mean(), accuracy['train_mse'].std())) print("Valid mse = {:.5f} +/- {:.5f}".format(accuracy['valid_mse'].mean(), accuracy['valid_mse'].std())) print("Test mse = {:.5f} +/- {:.5f}".format(accuracy['test_mse'].mean(), accuracy['test_mse'].std())) print("") print("Train r2 = {:.5f} +/- {:.5f}".format(accuracy['train_r2'].mean(), accuracy['train_r2'].std())) print("Valid r2 = {:.5f} +/- {:.5f}".format(accuracy['valid_r2'].mean(), accuracy['valid_r2'].std())) print("Test r2 = {:.5f} +/- {:.5f}".format(accuracy['test_r2'].mean(), accuracy['test_r2'].std())) # Find Accuracy, Loss +/- STD (Lockdown: path 1) target_idx = 1 data_output = 'data_lockdown_05ResidentialBuilding' df_results = pd.DataFrame(columns=['train_mse', 'valid_mse', 'test_mse', \ 'train_r2', 'valid_r2', 'test_r2']) # df_results = pd.read_csv(os.path.join(data_output, 'accuracy_forward01.csv'), index_col=0) partitions = 100 iterator = tqdm.notebook.tqdm(range(1, partitions + 1), desc='Partitions loop') for n in iterator: X = pd.read_csv('dataset_05ResidentialBuilding/X.csv') y = pd.read_csv('dataset_05ResidentialBuilding/y.csv') split_data(X, y, seed1=seed1[n], seed2=seed2[n]) # .normalize data sets df_to_tensor() normalize_data() make_DataLoaders(target_idx=target_idx) # .train model (forward) lr = best_lr_bward layer_sizes = [5, 1] epochs = 30000 data_out = "data_lockdown_05ResidentialBuilding/partitions" lock_flag = False features = xtrain.size(1) fname = 'model05_forward02_'+str(n) train_loss, valid_loss = train_model( lr, features, layer_sizes, lock_flag = lock_flag, epochs = epochs, early_stop = epochs, fname = fname) # .save relevant data f3 = [fname+'_last.pth', fname+'_best.pth'] save_output(data_out, 'train_loss02_'+str(n)+'.csv', 'valid_loss02_'+str(n)+'.csv', f3) # .find MSE, R2, etc... 
mm = MyNet(features, layer_sizes) mm.load_state_dict(torch.load(os.path.join(data_out, fname+'_best.pth'))) mm = mm.to(device) mm.eval() ypred = mm(xtrain) mse = mean_squared_error(ytrain.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel(), squared=True) r2 = r2_score(ytrain.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel()) df_results.loc[n, 'train_mse'] = mse df_results.loc[n, 'train_r2'] = r2 ypred = mm(xvalid) mse = mean_squared_error(yvalid.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel(), squared=True) r2 = r2_score(yvalid.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel()) df_results.loc[n, 'valid_mse'] = mse df_results.loc[n, 'valid_r2'] = r2 ypred = mm(xtest) mse = mean_squared_error(ytest.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel(), squared=True) r2 = r2_score(ytest.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel()) df_results.loc[n, 'test_mse'] = mse df_results.loc[n, 'test_r2'] = r2 df_results.to_csv(os.path.join(data_output, 'accuracy05_fward02.csv'), index=True) # Find Accuracy, Loss +/- STD (lockdown: path 2) target_idx = 1 data_output = 'data_lockdown_05ResidentialBuilding' df_results = pd.DataFrame(columns=['train_mse', 'valid_mse', 'test_mse', \ 'train_r2', 'valid_r2', 'test_r2']) # df_results = pd.read_csv(os.path.join(data_output, 'accuracy_forward01.csv'), index_col=0) partitions = 100 iterator = tqdm.notebook.tqdm(range(1, partitions + 1), desc='Partitions loop') for n in iterator: X = pd.read_csv('dataset_05ResidentialBuilding/X.csv') y = pd.read_csv('dataset_05ResidentialBuilding/y.csv') split_data(X, y, seed1=seed1[n], seed2=seed2[n]) # .normalize data sets df_to_tensor() normalize_data() make_DataLoaders(target_idx=target_idx) # .train model (forward) lr = best_lr_bward layer_sizes = [5, 1] epochs = 20000 lock_flag = True features = xtrain.size(1) step = 1 data_out = "data_lockdown_05ResidentialBuilding/partitions" model_forward_name = 'data_lockdown_05ResidentialBuilding/partitions/model05_forward02_'+str(n)+'_last.pth' fname = 'model05_backward02_'+str(n) train_loss, valid_loss = train_model( lr, features, layer_sizes, lock_flag = lock_flag, epochs = epochs, early_stop = epochs, fname = fname) # .save relevant data f3 = [fname+'_best.pth'] save_output(data_out, 'train_loss_bward02_'+str(n)+'.csv', 'valid_loss_bward02_'+str(n)+'.csv', f3) # .find MSE, R2, etc... 
mm = MyNet(features, layer_sizes) mm.load_state_dict(torch.load(os.path.join(data_out, fname+'_best.pth'))) mm = mm.to(device) mm.eval() ypred = mm(xtrain) mse = mean_squared_error(ytrain.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel(), squared=True) r2 = r2_score(ytrain.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel()) df_results.loc[n, 'train_mse'] = mse df_results.loc[n, 'train_r2'] = r2 ypred = mm(xvalid) mse = mean_squared_error(yvalid.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel(), squared=True) r2 = r2_score(yvalid.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel()) df_results.loc[n, 'valid_mse'] = mse df_results.loc[n, 'valid_r2'] = r2 ypred = mm(xtest) mse = mean_squared_error(ytest.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel(), squared=True) r2 = r2_score(ytest.detach().numpy()[:,target_idx], ypred.detach().numpy().ravel()) df_results.loc[n, 'test_mse'] = mse df_results.loc[n, 'test_r2'] = r2 df_results.to_csv(os.path.join(data_output, 'accuracy05_backward02.csv'), index=True) # Display results data_out = 'data_lockdown_05ResidentialBuilding' accuracy = pd.read_csv(os.path.join(data_out, 'accuracy05_backward02.csv')) print("Train mse = {:.5f} +/- {:.5f}".format(accuracy['train_mse'].mean(), accuracy['train_mse'].std())) print("Valid mse = {:.5f} +/- {:.5f}".format(accuracy['valid_mse'].mean(), accuracy['valid_mse'].std())) print("Test mse = {:.5f} +/- {:.5f}".format(accuracy['test_mse'].mean(), accuracy['test_mse'].std())) print("") print("Train r2 = {:.5f} +/- {:.5f}".format(accuracy['train_r2'].mean(), accuracy['train_r2'].std())) print("Valid r2 = {:.5f} +/- {:.5f}".format(accuracy['valid_r2'].mean(), accuracy['valid_r2'].std())) print("Test r2 = {:.5f} +/- {:.5f}".format(accuracy['test_r2'].mean(), accuracy['test_r2'].std())) ###Output Train mse = 0.01111 +/- 0.00586 Valid mse = 0.02360 +/- 0.01449 Test mse = 0.02986 +/- 0.02018 Train r2 = 0.98889 +/- 0.00586 Valid r2 = 0.97767 +/- 0.01120 Test r2 = 0.97132 +/- 0.01634 ###Markdown Comparison Target 2 ###Code # Read distributions accuracy_lockdown02 = pd.read_csv('data_lockdown_05ResidentialBuilding/accuracy05_backward02.csv', index_col=0) accuracy_forward02 = pd.read_csv('data_unconstrained_05ResidentialBuilding/accuracy05_forward02.csv', index_col=0) accuracy_xgb02 = pd.read_csv('data_xgb_05ResidentialBuilding/accuracy_xgb02.csv') accuracy_lasso02 = pd.read_csv('data_lasso_05ResidentialBuilding/accuracy_lasso02.csv') # R2 & MSE print("R2 on 'test' set:") print("Lockdown = {:.4f} +/- {:.3f}".format(accuracy_lockdown02['test_r2'].mean(), accuracy_lockdown02['test_r2'].std())) print("Forward = {:.4f} +/- {:.3f}".format(accuracy_forward02['test_r2'].mean(), accuracy_forward02['test_r2'].std())) print("xgboost = {:.4f} +/- {:.3f}".format(accuracy_xgb02['test_r2'].mean(), accuracy_xgb02['test_r2'].std())) print("Lasso = {:.4f} +/- {:.3f}".format(accuracy_lasso02['test_r2'].mean(), accuracy_lasso02['test_r2'].std())) print("\nR2 on 'validation' set:") print("Lockdown = {:.4f} +/- {:.3f}".format(accuracy_lockdown02['valid_r2'].mean(), accuracy_lockdown02['valid_r2'].std())) print("Forward = {:.4f} +/- {:.3f}".format(accuracy_forward02['valid_r2'].mean(), accuracy_forward02['valid_r2'].std())) print("xgboost = {:.4f} +/- {:.3f}".format(accuracy_xgb02['valid_r2'].mean(), accuracy_xgb02['valid_r2'].std())) print("Lasso = {:.4f} +/- {:.3f}".format(accuracy_lasso02['valid_r2'].mean(), accuracy_lasso02['valid_r2'].std())) # Print out test results print("On 'test' 
set:") statistic, pvalue = stats.ttest_rel(accuracy_lockdown02['test_r2'], accuracy_forward02['test_r2']) print("Lockdown vs Forward: pvalue={:e}, statistic={:e}".format(pvalue, statistic)) statistic, pvalue = stats.ttest_rel(accuracy_lockdown02['test_r2'], accuracy_xgb02['test_r2']) print("Lockdown vs xgboost: pvalue={:e}, statistic={:e}".format(pvalue, statistic)) statistic, pvalue = stats.ttest_rel(accuracy_lockdown02['test_r2'], accuracy_lasso02['test_r2']) print("Lockdown vs lasso: pvalue={:e}, statistic={:e}".format(pvalue, statistic)) # print("\nOn 'validation' set:") statistic, pvalue = stats.ttest_rel(accuracy_lockdown02['valid_r2'], accuracy_forward02['valid_r2']) print("Lockdown vs Forward: pvalue={:e}, statistic={:e}".format(pvalue, statistic)) statistic, pvalue = stats.ttest_rel(accuracy_lockdown02['valid_r2'], accuracy_xgb02['valid_r2']) print("Lockdown vs xgboost: pvalue={:e}, statistic={:e}".format(pvalue, statistic)) statistic, pvalue = stats.ttest_rel(accuracy_lockdown02['valid_r2'], accuracy_lasso02['valid_r2']) print("Lockdown vs lasso: pvalue={:e}, statistic={:e}".format(pvalue, statistic)) # Relative Root Mean Squared Error rrmse_lasso = np.sqrt(1.0 - accuracy_lasso02['test_r2']) print("Lasso = {:.3f} +/- {:.3f}".format(rrmse_lasso.mean(), rrmse_lasso.std())) rrmse_GB = np.sqrt(1.0 - accuracy_xgb02['test_r2']) print("GB = {:.3f} +/- {:.3f}".format(rrmse_GB.mean(), rrmse_GB.std())) rrmse_fcnn = np.sqrt(1.0 - accuracy_forward02['test_r2']) print("FCNN = {:.3f} +/- {:.3f}".format(rrmse_fcnn.mean(), rrmse_fcnn.std())) rrmse_lockout = np.sqrt(1.0 - accuracy_lockdown02['test_r2']) print("Lockout = {:.3f} +/- {:.3f}".format(rrmse_lockout.mean(), rrmse_lockout.std())) # Print out test results print("\nOn 'test' set:") statistic, pvalue = stats.ttest_rel(rrmse_lockout, rrmse_fcnn) print("Lockdown vs Forward: pvalue={:.2e}, statistic={:.2e}".format(pvalue, statistic)) statistic, pvalue = stats.ttest_rel(rrmse_lockout, rrmse_GB) print("Lockdown vs xgboost: pvalue={:.2e}, statistic={:.2e}".format(pvalue, statistic)) statistic, pvalue = stats.ttest_rel(rrmse_lockout, rrmse_lasso) print("Lockdown vs lasso: pvalue={:.2e}, statistic={:.2e}".format(pvalue, statistic)) ###Output Lasso = 0.197 +/- 0.046 GB = 0.212 +/- 0.047 FCNN = 0.180 +/- 0.057 Lockout = 0.164 +/- 0.042 On 'test' set: Lockdown vs Forward: pvalue=9.69e-04, statistic=-3.40e+00 Lockdown vs xgboost: pvalue=9.82e-20, statistic=-1.14e+01 Lockdown vs lasso: pvalue=1.19e-13, statistic=-8.61e+00
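###Markdown The "Relative Root Mean Squared Error" reported above is computed as sqrt(1 - R^2). Since R^2 = 1 - MSE/Var(y), this quantity is exactly the RMSE divided by the (population) standard deviation of the test targets, which is close to 1 here because the targets were standardized on the training split. The cell below is a small self-contained sanity check of that identity on toy data; the arrays are illustrative and are not part of the experiment above. ###Code
# Sanity check (toy data only): sqrt(1 - R^2) == RMSE / std(y_true), up to float error.
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
y_true = rng.normal(size=200)
y_pred = y_true + 0.3 * rng.normal(size=200)

rrmse_from_r2 = np.sqrt(1.0 - r2_score(y_true, y_pred))
rrmse_direct = np.sqrt(mean_squared_error(y_true, y_pred)) / np.std(y_true)

assert np.isclose(rrmse_from_r2, rrmse_direct)
###Output _____no_output_____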
Cryptopals.ipynb
###Markdown [the cryptopals crypto challenges](https://www.cryptopals.com/)* [Set 1: Basics](set1)* Set 2: Block Crypto* Set 3: Block and Stream Crypto* Set 4: Stream crypto and randomness* Set 5: Diffie-Hellman and friends* Set 6: RSA and DSA* Set 7: Hashes* Set 8: Abstract Algebra Welcome to the challengesWe can't introduce these any better than [Maciej Ceglowski](https://blog.pinboard.in/2013/04/the_matasano_crypto_challenges/) did, so read that blog post first.We've built a collection of 48 exercises that demonstrate attacks on real-world crypto.This is a different way to learn about crypto than taking a class or reading a book. We give you problems to solve. They're derived from weaknesses in real-world systems and modern cryptographic constructions. We give you enough info to learn about the underlying crypto concepts yourself. When you're finished, you'll not only have learned a good deal about how cryptosystems are built, but you'll also understand how they're attacked. What Are The Rules?There aren't any! For several years, we ran these challenges over email, and asked participants not to share their results. *The honor system worked beautifully!* But now we're ready to set aside the ceremony and just publish the challenges for everyone to work on. How Much Math Do I Need To Know?If you have any trouble with the math in these problems, you should be able to find a local 9th grader to help you out. It turns out that many modern crypto attacks don't involve much hard math. How Much Crypto Do I Need To Know?None. That's the point. So What Do I Need To Know?You'll want to be able to code proficiently in any language. We've received submissions in C, C++, Python, Ruby, Perl, Visual Basic, X86 Assembly, Haskell, and Lisp. Surprise us with another language. Our friend Maciej says these challenges are a good way to learn a new language, so maybe now's the time to pick up Clojure or Rust. What Should I Expect?Right now, we have eight sets. They get progressively harder. Again: these are based off real-world vulnerabilities. None of them are "puzzles". They're not designed to trip you up. Some of the attacks are clever, though, and if you're not familiar with crypto cleverness... well, you should like solving puzzles. An appreciation for early-90's MTV hip-hop can't hurt either. Can You Give Us A Long-Winded Indulgent Description For Why You'Ve Chosen To Do This?*It turns out that we can.*If you're not that familiar with crypto already, or if your familiarity comes mostly from things like Applied Cryptography, this fact may surprise you: most crypto is fatally broken. The systems we're relying on today that aren't known to be fatally broken are in a state of just waiting to be fatally broken. Nobody is sure that TLS 1.2 or SSH 2 or OTR are going to remain safe as designed.The current state of crypto software security is similar to the state of software security in the 1990s. Specifically: until around 1995, it was not common knowledge that software built by humans might have trouble counting. As a result, nobody could size a buffer properly, and humanity incurred billions of dollars in cleanup after a decade and a half of emergency fixes for memory corruption vulnerabilities.Counting is not a hard problem. But cryptography is. There are just a few things you can screw up to get the size of a buffer wrong. 
There are tens, probably hundreds, of obscure little things you can do to take a cryptosystem that should be secure even against an adversary with more CPU cores than there are atoms in the solar system, and make it solveable with a Perl script and 15 seconds. Don't take our word for it: do the challenges and you'll see.People "know" this already, but they don't really know it in their gut, and we think the reason for that is that very few people actually know how to implement the best-known attacks. So, mail us, and we'll give you a tour of them. How do I start?[Start here!](basics) Who did this?* Thomas Ptacek (@tqbf)* Sean Devlin (@spdevlin)* Alex Balducci (@iamalexalright)* Marcin Wielgoszewski (@marcinw)Cryptopals is maintained and expanded (from Set 8 on) by Sean Devlin, in conjunction with the [Cryptography Services Team](https://www.nccgroup.trust/us/our-services/security-consulting/cryptography-services/) at [NCC Group](https://www.nccgroup.trust/us/).We could not possibly have done this without the help of several other people. Roughly in order of influence:* [Nate Lawson](http://www.rootlabs.com/) taught us virtually everything we know about cryptography.* [Trevor Perrin](http://trevp.net/) taught Nate some of that. I can tell you a pretty compelling story about how Trevor is the intellectual origin of every successful attack on TLS over the past 5 years.* Thai Duong and Juliano Rizzo are the godfathers of practical cryptographic software security. Several things in this challenge didn't make sense to us until after Thai and Juliano exploited them in mainstream software.LegalIndividual exercise submissions are owned by their author, and may or may not be distributed under an open source license. ###Code #@title Vanilla Ice { vertical-output: true } #@markdown Setting the tempo with this early-90's MTV hip-hop song from IPython.display import YouTubeVideo YouTubeVideo('rog8ou-ZepE', width=600, height=400) ###Output _____no_output_____ ###Markdown Crypto Challenge Set 1 This is the **qualifying set**. We picked the exercises in it to ramp developers up gradually into coding cryptography, but also to verify that we were working with people who were ready to write code.This set is **relatively easy**. With one exception, most of these exercises should take only a couple minutes. But don't beat yourself up if it takes longer than that. It took Alex two weeks to get through the set!If you've written any crypto code in the past, you're going to feel like skipping a lot of this. **Don't skip them**. At least two of them (we won't say which) are important stepping stones to later attacks.* [Convert hex to base64](ex1) * [Fixed XOR](ex2)* [Single-byte XOR cipher](ex3)* [Detect single-character XOR](ex4)* [Implement repeating-key XOR](ex5)* [Break repeating-key XOR](ex6)* [AES in ECB mode](ex7)* [Detect AES in ECB mode](ex8) Convert hex to base64 The string: ###Code inp = '49276d206b696c6c696e6720796f757220627261696e206c696b65206120706f69736f6e6f7573206d757368726f6f6d' ###Output _____no_output_____ ###Markdown Should produce: ###Code exp = 'SSdtIGtpbGxpbmcgeW91ciBicmFpbiBsaWtlIGEgcG9pc29ub3VzIG11c2hyb29t' ###Output _____no_output_____ ###Markdown So go ahead and make that happen. You'll need to use this code for the rest of the exercises. 
code here ###Code from base64 import b64encode def hex_to_base64(dat): raw = bytes.fromhex(dat) print(raw.decode('utf-8')) return b64encode(raw).decode('utf-8') ###Output _____no_output_____ ###Markdown test here ###Code got = hex_to_base64(inp) assert got == exp, "got {} != exp {}".format(got, exp) ###Output I'm killing your brain like a poisonous mushroom ###Markdown ``````**Cryptopals Rule*****```Always operate on raw bytes, never on encoded strings. Only use hex and base64 for pretty-printing. ``` Fixed XORWrite a function that takes two equal-length buffers and produces their XOR combination.If your function works properly, then when you feed it the string: ###Code inp1 = '1c0111001f010100061a024b53535009181c' ###Output _____no_output_____ ###Markdown ... after hex decoding, and when XOR'd against: ###Code inp2 = '686974207468652062756c6c277320657965' ###Output _____no_output_____ ###Markdown ... should produce: ###Code exp = '746865206b696420646f6e277420706c6179' ###Output _____no_output_____ ###Markdown code here ###Code def fixed_xor(buf1, buf2): if len(buf1) > len(buf2): buf1 = buf1[:len(buf2)] assert len(buf1) == len(buf2), "buf1 {} != buf2 {}"\ .format(len(buf1), len(buf2)) buf1_decoded = bytes.fromhex(buf1) buf2_decoded = bytes.fromhex(buf2) return bytes([a^b for (a, b) in zip(buf1_decoded, buf2_decoded)]) ###Output _____no_output_____ ###Markdown test here ###Code got = fixed_xor(inp1, inp2) assert got == bytes.fromhex(exp), "got {} != exp {}".format(got, exp) print (got.decode('utf-8')) ###Output the kid don't play ###Markdown Single-byte XOR cipher The hex encoded string: ###Code inp = '1b37373331363f78151b7f2b783431333d78397828372d363c78373e783a393b3736' ###Output _____no_output_____ ###Markdown ... has been XOR'd against a single character. Find the key, decrypt the message.You can do this by hand. But don't: write code to do it for you.How? Devise some method for "scoring" a piece of English plaintext. Character frequency is a good metric. Evaluate each output and choose the one with the best score.``````**Achievement Unlocked*****```You now have our permission to make "ETAOIN SHRDLU" jokes on Twitter. ```code here: ###Code import operator def single_byte_xor_cipher(data): raw = bytes.fromhex(data) character_frequency = 'etaoin shrdlu' score = {} for key in range(256): frequency_histogram = dict(zip(character_frequency, [0 for _ in range(len(character_frequency))])) for r in raw: c = chr(r ^ key) if c in frequency_histogram: frequency_histogram[c] += 1 score[key] = sum(frequency_histogram.values()) key = max(score.items(), key=operator.itemgetter(1))[0] return bytes([r ^ key for r in raw]).decode() ###Output _____no_output_____ ###Markdown test here: ###Code got = single_byte_xor_cipher(inp) exp = "Cooking MC's like a pound of bacon" assert got == exp, "got {} != exp {}".format(got, exp) print(exp) ###Output Cooking MC's like a pound of bacon
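The scorer above simply counts hits against the most common English characters ('etaoin shrdlu'), which is enough for this exercise. A common refinement is a chi-squared-style comparison against approximate English letter frequencies; the sketch below is an illustrative alternative, not part of the original solution, and its frequency table is only approximate.

```python
# Sketch: chi-squared scoring against approximate English letter frequencies
# (lower score = more English-like). Frequency values are rough approximations.
ENGLISH_FREQ = {
    'a': 0.082, 'b': 0.015, 'c': 0.028, 'd': 0.043, 'e': 0.127, 'f': 0.022,
    'g': 0.020, 'h': 0.061, 'i': 0.070, 'j': 0.002, 'k': 0.008, 'l': 0.040,
    'm': 0.024, 'n': 0.067, 'o': 0.075, 'p': 0.019, 'q': 0.001, 'r': 0.060,
    's': 0.063, 't': 0.091, 'u': 0.028, 'v': 0.010, 'w': 0.024, 'x': 0.002,
    'y': 0.020, 'z': 0.001, ' ': 0.130,
}

def chi_squared_score(candidate: bytes) -> float:
    """Compare observed character counts in the candidate against expected English counts."""
    text = candidate.lower()
    score = 0.0
    for letter, freq in ENGLISH_FREQ.items():
        observed = text.count(letter.encode())
        expected = freq * len(text)
        score += (observed - expected) ** 2 / expected
    return score

def single_byte_xor_cipher_chi2(data: str) -> str:
    """Try every single-byte key and keep the most English-looking decryption."""
    raw = bytes.fromhex(data)
    best_key = min(range(256), key=lambda k: chi_squared_score(bytes(r ^ k for r in raw)))
    return bytes(r ^ best_key for r in raw).decode()
```

On the challenge string above this variant should recover the same plaintext, and a score like this tends to generalize better to "Detect single-character XOR", where many candidate lines have to be ranked against each other.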
advanced_functionality/distributed_tensorflow_mask_rcnn/mask-rcnn-inference.ipynb
###Markdown Mask-RCNN Model Inference in Amazon SageMakerThis notebook is a step-by-step tutorial on [Mask R-CNN](https://arxiv.org/abs/1703.06870) model inference using [Amazon SageMaker model deployment hosting service](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html).To get started, we initialize an Amazon execution role and initialize a `boto3` session to find our AWS region name. ###Code import boto3 import sagemaker from sagemaker import get_execution_role role = get_execution_role() # provide a pre-existing role ARN as an alternative to creating a new role print(f'SageMaker Execution Role:{role}') session = boto3.session.Session() aws_region = session.region_name print(f'AWS region:{aws_region}') ###Output _____no_output_____ ###Markdown Build and Push Amazon SageMaker Serving Container ImagesFor this step, the [IAM Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) attached to this notebook instance needs full access to [Amazon ECR service](https://aws.amazon.com/ecr/). If you created this notebook instance using the ```./stack-sm.sh``` script in this repository, the IAM Role attached to this notebook instance is already setup with full access to Amazon ECR service. Below, we have a choice of two different models for doing inference:1. [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN)2. [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow)It is recommended that you build and push both Amazon SageMaker serving container images below and use one of the two container images for serving the model from an Amazon SageMaker Endpoint. Build and Push TensorPack Faster-RCNN/Mask-RCNN Serving Container ImageUse ```./container-serving/build_tools/build_and_push.sh``` script to build and push the [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) serving container image to Amazon ECR. ###Code !cat ./container-serving/build_tools/build_and_push.sh ###Output _____no_output_____ ###Markdown Using your *AWS region* as argument, run the cell below. ###Code %%time ! ./container-serving/build_tools/build_and_push.sh {aws_region} ###Output _____no_output_____ ###Markdown Set ```tensorpack_image``` below to Amazon ECR URI of the serving image you pushed above. ###Code tensorpack_image = #<amazon-ecr-uri> ###Output _____no_output_____ ###Markdown Build and Push AWS Samples Mask R-CNN Serving Container ImageUse ```./container-serving-optimized/build_tools/build_and_push.sh``` script to build and push the [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) serving container image to Amazon ECR. ###Code !cat ./container-serving-optimized/build_tools/build_and_push.sh ###Output _____no_output_____ ###Markdown Using your *AWS region* as argument, run the cell below. ###Code %%time ! ./container-serving-optimized/build_tools/build_and_push.sh {aws_region} ###Output _____no_output_____ ###Markdown Set ```aws_samples_image``` below to Amazon ECR URI of the serving image you pushed above. ###Code aws_samples_image = #<amazon-ecr-uri> ###Output _____no_output_____ ###Markdown Select Serving Container ImageAbove, we built and pushed [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) and [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) serving container images to Amazon ECR. 
Now we are ready to deploy our trained model to an Amazon SageMaker Endpoint using one of the two container images.Next, we set ```serving_image``` to either the `tensorpack_image` or the `aws_samples_image` variable you defined above, making sure that the serving container image we set below matches our trained model. ###Code serving_image = # set to tensorpack_image or aws_samples_image variable (no string quotes) print(f'serving image: {serving_image}') ###Output _____no_output_____ ###Markdown Create Amazon SageMaker Session Next, we create a SageMaker session. ###Code sagemaker_session = sagemaker.session.Session(boto_session=session) ###Output _____no_output_____ ###Markdown Define Amazon SageMaker ModelNext, we define an Amazon SageMaker Model that defines the deployed model we will serve from an Amazon SageMaker Endpoint. ###Code model_name= 'mask-rcnn-model-1'# Name of the model ###Output _____no_output_____ ###Markdown This model assumes you are using ResNet-50 pre-trained model weights for the ResNet backbone. If this is not true, please adjust `PRETRAINED_MODEL` value below. Please ensure that the `s3_model_url` of your trained model used below is consistent with the container `serving_image` you set above. ###Code s3_model_url = # Trained Model Amazon S3 URI in the format s3://<your path>/model.tar.gz serving_container_def = { 'Image': serving_image, 'ModelDataUrl': s3_model_url, 'Mode': 'SingleModel', 'Environment': { 'SM_MODEL_DIR' : '/opt/ml/model', 'RESNET_ARCH': 'resnet50' # 'resnet50' or 'resnet101' } } create_model_response = sagemaker_session.create_model(name=model_name, role=role, container_defs=serving_container_def) print(create_model_response) ###Output _____no_output_____ ###Markdown Next, we set the name of the Amaozn SageMaker hosted service endpoint configuration. ###Code endpoint_config_name=f'{model_name}-endpoint-config' print(endpoint_config_name) ###Output _____no_output_____ ###Markdown Next, we create the Amazon SageMaker hosted service endpoint configuration that uses one instance of `ml.p3.2xlarge` to serve the model. ###Code epc = sagemaker_session.create_endpoint_config( name=endpoint_config_name, model_name=model_name, initial_instance_count=1, instance_type='ml.p3.2xlarge') print(epc) ###Output _____no_output_____ ###Markdown Next we specify the Amazon SageMaker endpoint name for the endpoint used to serve the model. ###Code endpoint_name=f'{model_name}-endpoint' print(endpoint_name) ###Output _____no_output_____ ###Markdown Next, we create the Amazon SageMaker endpoint using the endpoint configuration we created above. ###Code ep=sagemaker_session.create_endpoint(endpoint_name=endpoint_name, config_name=endpoint_config_name, wait=True) print(ep) ###Output _____no_output_____ ###Markdown Now that the Amazon SageMaker endpoint is in service, we will use the endpoint to do inference for test images. Next, we download [COCO 2017 Test images](http://cocodataset.org/download). ###Code !wget -O ~/test2017.zip http://images.cocodataset.org/zips/test2017.zip ###Output _____no_output_____ ###Markdown We extract the downloaded COCO 2017 Test images to the home directory. ###Code !unzip -q -d ~/ ~/test2017.zip !rm ~/test2017.zip ###Output _____no_output_____ ###Markdown Below, we will use the downloaded COCO 2017 Test images to test our deployed Mask R-CNN model. However, in order to visualize the detection results, we need to define some helper functions. 
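Before moving on to the helper functions, note that endpoint creation can take several minutes even with `wait=True`, so it can be worth confirming the endpoint really is `InService` before sending any requests. The cell below is an optional sanity check, not part of the original walkthrough; it only assumes the `aws_region` and `endpoint_name` variables defined above and the standard `boto3` SageMaker client.

```python
# Optional sanity check (sketch): confirm the endpoint status before invoking it.
import boto3

sm_client = boto3.client("sagemaker", region_name=aws_region)
status = sm_client.describe_endpoint(EndpointName=endpoint_name)["EndpointStatus"]
print(f"Endpoint {endpoint_name} status: {status}")
assert status == "InService", f"Endpoint not ready (status: {status})"
```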
Visualization Helper FunctionsNext, we define a helper function to convert COCO Run Length Encoding (RLE) to a binary image mask. The RLE encoding is a dictionary with two keys `counts` and `size`. The `counts` value is a list of counts of run lengths of alternating 0s and 1s for an image binary mask for a specific instance segmentation, with the image is scanned row-wise. The `counts` list starts with a count of 0s. If the binary mask value at `(0,0)` pixel is 1, then the `counts` list starts with a `0`. The `size` value is a list containing image height and width. ###Code import numpy as np def rle_to_binary_mask(rle, img_shape): value = 0 mask_array = [] for count in rle: mask_array.extend([int(value)]*count) value = (value + 1) % 2 assert len(mask_array) == img_shape[0]*img_shape[1] b_mask = np.array(mask_array, dtype=np.uint8).reshape(img_shape) return b_mask ###Output _____no_output_____ ###Markdown Next, we define a helper function for generating random colors for visualizing detection results. ###Code import colorsys import random def random_colors(N, bright=False): brightness = 1.0 if bright else 0.7 hsv = [(i / N, 1, brightness) for i in range(N)] colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv)) random.shuffle(colors) return colors ###Output _____no_output_____ ###Markdown Next, we define a helper function to apply an image binary mask for an instance segmentation to the image. Each image binary mask is of the size of the image. ###Code def apply_mask(image, mask, color, alpha=0.5): a_mask = np.stack([mask]*3, axis=2).astype(np.int8) for c in range(3): image[:, :, c] = np.where(mask == 1, image[:, :, c] *(1 - alpha) + alpha * color[c]*255,image[:, :, c]) return image ###Output _____no_output_____ ###Markdown Next, we define a helper function to show the applied detection results. ###Code import matplotlib.pyplot as plt from matplotlib import patches def show_detection_results(img=None, annotations=None): """ img: image numpy array annotations: annotations array for image where each annotation is in COCO format """ num_annotations = len(annotations) colors = random_colors(num_annotations) fig,ax = plt.subplots(figsize=(img.shape[1]//50, img.shape[0]//50)) for i, a in enumerate(annotations): segm = a['segmentation'] img_shape = tuple(segm['size']) rle = segm['counts'] binary_image_mask = rle_to_binary_mask(rle, img_shape) bbox = a['bbox'] category_id = a['category_id'] category_name = a['category_name'] # select color from random colors color = colors[i] # Show bounding box bbox_x, bbox_y, bbox_w, bbox_h = bbox box_patch = patches.Rectangle((bbox_x, bbox_y), bbox_w, bbox_h, linewidth=1, alpha=0.7, linestyle="dashed", edgecolor=color, facecolor='none') ax.add_patch(box_patch) label = f'{category_name}:{category_id}' ax.text(bbox_x, bbox_y + 8, label, color='w', size=11, backgroundcolor="none") # Show mask img = apply_mask(img, binary_image_mask.astype(np.bool), color) ax.imshow(img.astype(int)) plt.show() ###Output _____no_output_____ ###Markdown Visualize Detection ResultsNext, we select a random image from COCO 2017 Test image dataset. After you are done visualizing the detection results for this image, you can come back to the cell below and select your next random image to test. 
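As a quick sanity check of the RLE decoder defined above, before picking a test image, here is a tiny hand-checkable example; the values are made up for illustration and are not taken from COCO.

```python
# Toy example (made-up values): counts of alternating 0s and 1s, decoded row-wise.
toy_rle = [2, 3, 4]      # two 0s, three 1s, four 0s -> 9 pixels
toy_shape = (3, 3)       # (height, width)
print(rle_to_binary_mask(toy_rle, toy_shape))
# Expected output:
# [[0 0 1]
#  [1 1 0]
#  [0 0 0]]
```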
###Code import os import random test2017_dir=os.path.join(os.environ['HOME'], "test2017") img_id=random.choice(os.listdir(test2017_dir)) img_local_path = os.path.join(test2017_dir,img_id) print(img_local_path) ###Output _____no_output_____ ###Markdown Next, we read the image and convert it from BGR color to RGB color format. ###Code import cv2 img=cv2.imread(img_local_path, cv2.IMREAD_COLOR) print(img.shape) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) ###Output _____no_output_____ ###Markdown Next, we show the image that we randomly selected. ###Code fig,ax = plt.subplots(figsize=(img.shape[1]//50, img.shape[0]//50)) ax.imshow(img.astype(int)) plt.show() ###Output _____no_output_____ ###Markdown Next, we invoke the Amazon SageMaker Endpoint to detect objects in the test image that we randomly selected.This REST API endpoint only accepts HTTP POST requests with `ContentType` set to `application/json`. The content of the POST request must conform to following JSON schema:`{ "img_id": "YourImageId", "img_data": "Base64 encoded image file content, encoded as utf-8 string" }`The response of the POST request conforms to following JSON schema:`{ "annotations": [ { "bbox": [X, Y, width, height], "category_id": "class id", "category_name": "class name", "segmentation": { "counts": [ run-length-encoding, ], "size": [height, width]} }, ] }` ###Code import boto3 import base64 import json client = boto3.client('sagemaker-runtime') with open(img_local_path, "rb") as image_file: img_data = base64.b64encode(image_file.read()) data = {"img_id": img_id} data["img_data"] = img_data.decode('utf-8') body=json.dumps(data).encode('utf-8') response = client.invoke_endpoint(EndpointName=endpoint_name, ContentType="application/json", Accept="application/json", Body=body) body=response['Body'].read() msg=body.decode('utf-8') data=json.loads(msg) assert data is not None ###Output _____no_output_____ ###Markdown The response from the endpoint includes annotations for the detected objects in COCO annotations format. Next, we aplly all the detection results to the image. ###Code annotations = data['annotations'] show_detection_results(img, annotations) ###Output _____no_output_____ ###Markdown Delete SageMaker Endpoint, Endpoint Config and ModelIf you are done testing, delete the deployed Amazon SageMaker endpoint, endpoint config, and the model below. The trained model in S3. bucket is not deleted. If you are not done testing, go back to the section Visualize Detection Results and select another test image. ###Code sagemaker_session.delete_endpoint(endpoint_name=endpoint_name) sagemaker_session.delete_endpoint_config(endpoint_config_name=endpoint_config_name) sagemaker_session.delete_model(model_name=model_name) ###Output _____no_output_____ ###Markdown Mask-RCNN Model Inference in Amazon SageMakerThis notebook is a step-by-step tutorial on [Mask R-CNN](https://arxiv.org/abs/1703.06870) model inference using [Amazon SageMaker model deployment hosting service](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html).To get started, we initialize an Amazon execution role and initialize a `boto3` session to find our AWS region name. 
###Code import boto3 import sagemaker from sagemaker import get_execution_role role = ( get_execution_role() ) # provide a pre-existing role ARN as an alternative to creating a new role print(f"SageMaker Execution Role:{role}") session = boto3.session.Session() aws_region = session.region_name print(f"AWS region:{aws_region}") ###Output _____no_output_____ ###Markdown Build and Push Amazon SageMaker Serving Container ImagesFor this step, the [IAM Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) attached to this notebook instance needs full access to [Amazon ECR service](https://aws.amazon.com/ecr/). If you created this notebook instance using the ```./stack-sm.sh``` script in this repository, the IAM Role attached to this notebook instance is already setup with full access to Amazon ECR service. Below, we have a choice of two different models for doing inference:1. [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN)2. [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow)It is recommended that you build and push both Amazon SageMaker serving container images below and use one of the two container images for serving the model from an Amazon SageMaker Endpoint. Build and Push TensorPack Faster-RCNN/Mask-RCNN Serving Container ImageUse ```./container-serving/build_tools/build_and_push.sh``` script to build and push the [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) serving container image to Amazon ECR. ###Code !cat ./container-serving/build_tools/build_and_push.sh ###Output _____no_output_____ ###Markdown Using your *AWS region* as argument, run the cell below. ###Code %%time ! ./container-serving/build_tools/build_and_push.sh {aws_region} ###Output _____no_output_____ ###Markdown Set ```tensorpack_image``` below to Amazon ECR URI of the serving image you pushed above. ###Code tensorpack_image = # mask-rcnn-tensorpack-serving-sagemaker ECR URI ###Output _____no_output_____ ###Markdown Build and Push AWS Samples Mask R-CNN Serving Container ImageUse ```./container-serving-optimized/build_tools/build_and_push.sh``` script to build and push the [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) serving container image to Amazon ECR. ###Code !cat ./container-serving-optimized/build_tools/build_and_push.sh ###Output _____no_output_____ ###Markdown Using your *AWS region* as argument, run the cell below. ###Code %%time ! ./container-serving-optimized/build_tools/build_and_push.sh {aws_region} ###Output _____no_output_____ ###Markdown Set ```aws_samples_image``` below to Amazon ECR URI of the serving image you pushed above. ###Code aws_samples_image = # mask-rcnn-tensorflow-serving-sagemaker ECR URI ###Output _____no_output_____ ###Markdown Select Serving Container ImageAbove, we built and pushed [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) and [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) serving container images to Amazon ECR. Now we are ready to deploy our trained model to an Amazon SageMaker Endpoint using one of the two container images.Next, we set ```serving_image``` to either the `tensorpack_image` or the `aws_samples_image` variable you defined above, making sure that the serving container image we set below matches our trained model. 
###Code serving_image = # set to tensorpack_image or aws_samples_image variable (no string quotes) print(f'serving image: {serving_image}') ###Output _____no_output_____ ###Markdown Create Amazon SageMaker Session Next, we create a SageMaker session. ###Code sagemaker_session = sagemaker.session.Session(boto_session=session) ###Output _____no_output_____ ###Markdown Define Amazon SageMaker ModelNext, we define an Amazon SageMaker Model that defines the deployed model we will serve from an Amazon SageMaker Endpoint. ###Code model_name = "mask-rcnn-model-1" # Name of the model ###Output _____no_output_____ ###Markdown This model assumes you are using ResNet-50 pre-trained model weights for the ResNet backbone. If this is not true, please adjust `PRETRAINED_MODEL` value below. Please ensure that the `s3_model_url` of your trained model used below is consistent with the container `serving_image` you set above. ###Code s3_model_url = # Trained Model Amazon S3 URI in the format s3://<your path>/model.tar.gz serving_container_def = { 'Image': serving_image, 'ModelDataUrl': s3_model_url, 'Mode': 'SingleModel', 'Environment': { 'SM_MODEL_DIR' : '/opt/ml/model', 'RESNET_ARCH': 'resnet50' # 'resnet50' or 'resnet101' } } create_model_response = sagemaker_session.create_model(name=model_name, role=role, container_defs=serving_container_def) print(create_model_response) ###Output _____no_output_____ ###Markdown Next, we set the name of the Amaozn SageMaker hosted service endpoint configuration. ###Code endpoint_config_name = f"{model_name}-endpoint-config" print(endpoint_config_name) ###Output _____no_output_____ ###Markdown Next, we create the Amazon SageMaker hosted service endpoint configuration that uses one instance of `ml.p3.2xlarge` to serve the model. ###Code epc = sagemaker_session.create_endpoint_config( name=endpoint_config_name, model_name=model_name, initial_instance_count=1, instance_type="ml.g4dn.xlarge", ) print(epc) ###Output _____no_output_____ ###Markdown Next we specify the Amazon SageMaker endpoint name for the endpoint used to serve the model. ###Code endpoint_name = f"{model_name}-endpoint" print(endpoint_name) ###Output _____no_output_____ ###Markdown Next, we create the Amazon SageMaker endpoint using the endpoint configuration we created above. ###Code ep = sagemaker_session.create_endpoint( endpoint_name=endpoint_name, config_name=endpoint_config_name, wait=True ) print(ep) ###Output _____no_output_____ ###Markdown Now that the Amazon SageMaker endpoint is in service, we will use the endpoint to do inference for test images. Next, we download [COCO 2017 Test images](http://cocodataset.org/download). ###Code !wget -O ./test2017.zip http://images.cocodataset.org/zips/test2017.zip ###Output _____no_output_____ ###Markdown We extract the downloaded COCO 2017 Test images to the home directory. ###Code !unzip -q ./test2017.zip !rm ./test2017.zip ###Output _____no_output_____ ###Markdown Below, we will use the downloaded COCO 2017 Test images to test our deployed Mask R-CNN model. However, in order to visualize the detection results, we need to define some helper functions. Visualization Helper FunctionsNext, we define a helper function to convert COCO Run Length Encoding (RLE) to a binary image mask. The RLE encoding is a dictionary with two keys `counts` and `size`. The `counts` value is a list of counts of run lengths of alternating 0s and 1s for an image binary mask for a specific instance segmentation, with the image is scanned row-wise. 
The `counts` list starts with a count of 0s. If the binary mask value at `(0,0)` pixel is 1, then the `counts` list starts with a `0`. The `size` value is a list containing image height and width. ###Code import numpy as np def rle_to_binary_mask(rle, img_shape): value = 0 mask_array = [] for count in rle: mask_array.extend([int(value)] * count) value = (value + 1) % 2 assert len(mask_array) == img_shape[0] * img_shape[1] b_mask = np.array(mask_array, dtype=np.uint8).reshape(img_shape) return b_mask ###Output _____no_output_____ ###Markdown Next, we define a helper function for generating random colors for visualizing detection results. ###Code import colorsys import random def random_colors(N, bright=False): brightness = 1.0 if bright else 0.7 hsv = [(i / N, 1, brightness) for i in range(N)] colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv)) random.shuffle(colors) return colors ###Output _____no_output_____ ###Markdown Next, we define a helper function to apply an image binary mask for an instance segmentation to the image. Each image binary mask is of the size of the image. ###Code def apply_mask(image, mask, color, alpha=0.5): a_mask = np.stack([mask] * 3, axis=2).astype(np.int8) for c in range(3): image[:, :, c] = np.where( mask == 1, image[:, :, c] * (1 - alpha) + alpha * color[c] * 255, image[:, :, c] ) return image ###Output _____no_output_____ ###Markdown Next, we define a helper function to show the applied detection results. ###Code import matplotlib.pyplot as plt from matplotlib import patches def show_detection_results(img=None, annotations=None): """ img: image numpy array annotations: annotations array for image where each annotation is in COCO format """ num_annotations = len(annotations) colors = random_colors(num_annotations) fig, ax = plt.subplots(figsize=(img.shape[1] // 50, img.shape[0] // 50)) for i, a in enumerate(annotations): segm = a["segmentation"] img_shape = tuple(segm["size"]) rle = segm["counts"] binary_image_mask = rle_to_binary_mask(rle, img_shape) bbox = a["bbox"] category_id = a["category_id"] category_name = a["category_name"] # select color from random colors color = colors[i] # Show bounding box bbox_x, bbox_y, bbox_w, bbox_h = bbox box_patch = patches.Rectangle( (bbox_x, bbox_y), bbox_w, bbox_h, linewidth=1, alpha=0.7, linestyle="dashed", edgecolor=color, facecolor="none", ) ax.add_patch(box_patch) label = f"{category_name}:{category_id}" ax.text(bbox_x, bbox_y + 8, label, color="w", size=11, backgroundcolor="none") # Show mask img = apply_mask(img, binary_image_mask.astype(np.bool), color) ax.imshow(img.astype(int)) plt.show() ###Output _____no_output_____ ###Markdown Visualize Detection ResultsNext, we select a random image from COCO 2017 Test image dataset. After you are done visualizing the detection results for this image, you can come back to the cell below and select your next random image to test. ###Code import os import random test2017_dir = os.path.join(".", "test2017") img_id = random.choice(os.listdir(test2017_dir)) img_local_path = os.path.join(test2017_dir, img_id) print(img_local_path) ###Output _____no_output_____ ###Markdown Next, we read the image and convert it from BGR color to RGB color format. ###Code import cv2 img = cv2.imread(img_local_path, cv2.IMREAD_COLOR) print(img.shape) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) ###Output _____no_output_____ ###Markdown Next, we show the image that we randomly selected. 
###Code fig, ax = plt.subplots(figsize=(img.shape[1] // 50, img.shape[0] // 50)) ax.imshow(img.astype(int)) plt.show() ###Output _____no_output_____ ###Markdown Next, we invoke the Amazon SageMaker Endpoint to detect objects in the test image that we randomly selected.This REST API endpoint only accepts HTTP POST requests with `ContentType` set to `application/json`. The content of the POST request must conform to following JSON schema:`{ "img_id": "YourImageId", "img_data": "Base64 encoded image file content, encoded as utf-8 string" }`The response of the POST request conforms to following JSON schema:`{ "annotations": [ { "bbox": [X, Y, width, height], "category_id": "class id", "category_name": "class name", "segmentation": { "counts": [ run-length-encoding, ], "size": [height, width]} }, ] }` ###Code import boto3 import base64 import json client = boto3.client("sagemaker-runtime") with open(img_local_path, "rb") as image_file: img_data = base64.b64encode(image_file.read()) data = {"img_id": img_id} data["img_data"] = img_data.decode("utf-8") body = json.dumps(data).encode("utf-8") response = client.invoke_endpoint( EndpointName=endpoint_name, ContentType="application/json", Accept="application/json", Body=body ) body = response["Body"].read() msg = body.decode("utf-8") data = json.loads(msg) assert data is not None ###Output _____no_output_____ ###Markdown The response from the endpoint includes annotations for the detected objects in COCO annotations format. Next, we aplly all the detection results to the image. ###Code annotations = data["annotations"] show_detection_results(img, annotations) ###Output _____no_output_____ ###Markdown Delete SageMaker Endpoint, Endpoint Config and ModelIf you are done testing, delete the deployed Amazon SageMaker endpoint, endpoint config, and the model below. The trained model in S3. bucket is not deleted. If you are not done testing, go back to the section Visualize Detection Results and select another test image. ###Code sagemaker_session.delete_endpoint(endpoint_name=endpoint_name) sagemaker_session.delete_endpoint_config(endpoint_config_name=endpoint_config_name) sagemaker_session.delete_model(model_name=model_name) ###Output _____no_output_____ ###Markdown Mask-RCNN Model Inference in Amazon SageMakerThis notebook is a step-by-step tutorial on [Mask R-CNN](https://arxiv.org/abs/1703.06870) model inference using [Amazon SageMaker model deployment hosting service](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html).To get started, we initialize an Amazon execution role and initialize a `boto3` session to find our AWS region name. ###Code import boto3 import sagemaker from sagemaker import get_execution_role role = get_execution_role() # provide a pre-existing role ARN as an alternative to creating a new role print(f'SageMaker Execution Role:{role}') session = boto3.session.Session() aws_region = session.region_name print(f'AWS region:{aws_region}') ###Output _____no_output_____ ###Markdown Build and Push Amazon SageMaker Serving Container ImagesFor this step, the [IAM Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) attached to this notebook instance needs full access to [Amazon ECR service](https://aws.amazon.com/ecr/). If you created this notebook instance using the ```./stack-sm.sh``` script in this repository, the IAM Role attached to this notebook instance is already setup with full access to Amazon ECR service. Below, we have a choice of two different models for doing inference:1. 
[TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN)2. [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow)It is recommended that you build and push both Amazon SageMaker serving container images below and use one of the two container images for serving the model from an Amazon SageMaker Endpoint. Build and Push TensorPack Faster-RCNN/Mask-RCNN Serving Container ImageUse ```./container-serving/build_tools/build_and_push.sh``` script to build and push the [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) serving container image to Amazon ECR. ###Code !cat ./container-serving/build_tools/build_and_push.sh ###Output _____no_output_____ ###Markdown Using your *AWS region* as argument, run the cell below. ###Code %%time ! ./container-serving/build_tools/build_and_push.sh {aws_region} ###Output _____no_output_____ ###Markdown Set ```tensorpack_image``` below to Amazon ECR URI of the serving image you pushed above. ###Code tensorpack_image = # mask-rcnn-tensorpack-serving-sagemaker ECR URI ###Output _____no_output_____ ###Markdown Build and Push AWS Samples Mask R-CNN Serving Container ImageUse ```./container-serving-optimized/build_tools/build_and_push.sh``` script to build and push the [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) serving container image to Amazon ECR. ###Code !cat ./container-serving-optimized/build_tools/build_and_push.sh ###Output _____no_output_____ ###Markdown Using your *AWS region* as argument, run the cell below. ###Code %%time ! ./container-serving-optimized/build_tools/build_and_push.sh {aws_region} ###Output _____no_output_____ ###Markdown Set ```aws_samples_image``` below to Amazon ECR URI of the serving image you pushed above. ###Code aws_samples_image = # mask-rcnn-tensorflow-serving-sagemaker ECR URI ###Output _____no_output_____ ###Markdown Select Serving Container ImageAbove, we built and pushed [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) and [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) serving container images to Amazon ECR. Now we are ready to deploy our trained model to an Amazon SageMaker Endpoint using one of the two container images.Next, we set ```serving_image``` to either the `tensorpack_image` or the `aws_samples_image` variable you defined above, making sure that the serving container image we set below matches our trained model. ###Code serving_image = # set to tensorpack_image or aws_samples_image variable (no string quotes) print(f'serving image: {serving_image}') ###Output _____no_output_____ ###Markdown Create Amazon SageMaker Session Next, we create a SageMaker session. ###Code sagemaker_session = sagemaker.session.Session(boto_session=session) ###Output _____no_output_____ ###Markdown Define Amazon SageMaker ModelNext, we define an Amazon SageMaker Model that defines the deployed model we will serve from an Amazon SageMaker Endpoint. ###Code model_name= 'mask-rcnn-model-1'# Name of the model ###Output _____no_output_____ ###Markdown This model assumes you are using ResNet-50 pre-trained model weights for the ResNet backbone. If this is not true, please adjust `PRETRAINED_MODEL` value below. Please ensure that the `s3_model_url` of your trained model used below is consistent with the container `serving_image` you set above. 
###Code s3_model_url = # Trained Model Amazon S3 URI in the format s3://<your path>/model.tar.gz serving_container_def = { 'Image': serving_image, 'ModelDataUrl': s3_model_url, 'Mode': 'SingleModel', 'Environment': { 'SM_MODEL_DIR' : '/opt/ml/model', 'RESNET_ARCH': 'resnet50' # 'resnet50' or 'resnet101' } } create_model_response = sagemaker_session.create_model(name=model_name, role=role, container_defs=serving_container_def) print(create_model_response) ###Output _____no_output_____ ###Markdown Next, we set the name of the Amaozn SageMaker hosted service endpoint configuration. ###Code endpoint_config_name=f'{model_name}-endpoint-config' print(endpoint_config_name) ###Output _____no_output_____ ###Markdown Next, we create the Amazon SageMaker hosted service endpoint configuration that uses one instance of `ml.p3.2xlarge` to serve the model. ###Code epc = sagemaker_session.create_endpoint_config( name=endpoint_config_name, model_name=model_name, initial_instance_count=1, instance_type='ml.g4dn.xlarge') print(epc) ###Output _____no_output_____ ###Markdown Next we specify the Amazon SageMaker endpoint name for the endpoint used to serve the model. ###Code endpoint_name=f'{model_name}-endpoint' print(endpoint_name) ###Output _____no_output_____ ###Markdown Next, we create the Amazon SageMaker endpoint using the endpoint configuration we created above. ###Code ep=sagemaker_session.create_endpoint(endpoint_name=endpoint_name, config_name=endpoint_config_name, wait=True) print(ep) ###Output _____no_output_____ ###Markdown Now that the Amazon SageMaker endpoint is in service, we will use the endpoint to do inference for test images. Next, we download [COCO 2017 Test images](http://cocodataset.org/download). ###Code !wget -O ./test2017.zip http://images.cocodataset.org/zips/test2017.zip ###Output _____no_output_____ ###Markdown We extract the downloaded COCO 2017 Test images to the home directory. ###Code !unzip -q ./test2017.zip !rm ./test2017.zip ###Output _____no_output_____ ###Markdown Below, we will use the downloaded COCO 2017 Test images to test our deployed Mask R-CNN model. However, in order to visualize the detection results, we need to define some helper functions. Visualization Helper FunctionsNext, we define a helper function to convert COCO Run Length Encoding (RLE) to a binary image mask. The RLE encoding is a dictionary with two keys `counts` and `size`. The `counts` value is a list of counts of run lengths of alternating 0s and 1s for an image binary mask for a specific instance segmentation, with the image is scanned row-wise. The `counts` list starts with a count of 0s. If the binary mask value at `(0,0)` pixel is 1, then the `counts` list starts with a `0`. The `size` value is a list containing image height and width. ###Code import numpy as np def rle_to_binary_mask(rle, img_shape): value = 0 mask_array = [] for count in rle: mask_array.extend([int(value)]*count) value = (value + 1) % 2 assert len(mask_array) == img_shape[0]*img_shape[1] b_mask = np.array(mask_array, dtype=np.uint8).reshape(img_shape) return b_mask ###Output _____no_output_____ ###Markdown Next, we define a helper function for generating random colors for visualizing detection results. 
###Code import colorsys import random def random_colors(N, bright=False): brightness = 1.0 if bright else 0.7 hsv = [(i / N, 1, brightness) for i in range(N)] colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv)) random.shuffle(colors) return colors ###Output _____no_output_____ ###Markdown Next, we define a helper function to apply an image binary mask for an instance segmentation to the image. Each image binary mask is of the size of the image. ###Code def apply_mask(image, mask, color, alpha=0.5): a_mask = np.stack([mask]*3, axis=2).astype(np.int8) for c in range(3): image[:, :, c] = np.where(mask == 1, image[:, :, c] *(1 - alpha) + alpha * color[c]*255,image[:, :, c]) return image ###Output _____no_output_____ ###Markdown Next, we define a helper function to show the applied detection results. ###Code import matplotlib.pyplot as plt from matplotlib import patches def show_detection_results(img=None, annotations=None): """ img: image numpy array annotations: annotations array for image where each annotation is in COCO format """ num_annotations = len(annotations) colors = random_colors(num_annotations) fig,ax = plt.subplots(figsize=(img.shape[1]//50, img.shape[0]//50)) for i, a in enumerate(annotations): segm = a['segmentation'] img_shape = tuple(segm['size']) rle = segm['counts'] binary_image_mask = rle_to_binary_mask(rle, img_shape) bbox = a['bbox'] category_id = a['category_id'] category_name = a['category_name'] # select color from random colors color = colors[i] # Show bounding box bbox_x, bbox_y, bbox_w, bbox_h = bbox box_patch = patches.Rectangle((bbox_x, bbox_y), bbox_w, bbox_h, linewidth=1, alpha=0.7, linestyle="dashed", edgecolor=color, facecolor='none') ax.add_patch(box_patch) label = f'{category_name}:{category_id}' ax.text(bbox_x, bbox_y + 8, label, color='w', size=11, backgroundcolor="none") # Show mask img = apply_mask(img, binary_image_mask.astype(np.bool), color) ax.imshow(img.astype(int)) plt.show() ###Output _____no_output_____ ###Markdown Visualize Detection ResultsNext, we select a random image from COCO 2017 Test image dataset. After you are done visualizing the detection results for this image, you can come back to the cell below and select your next random image to test. ###Code import os import random test2017_dir=os.path.join(".", "test2017") img_id=random.choice(os.listdir(test2017_dir)) img_local_path = os.path.join(test2017_dir,img_id) print(img_local_path) ###Output _____no_output_____ ###Markdown Next, we read the image and convert it from BGR color to RGB color format. ###Code import cv2 img=cv2.imread(img_local_path, cv2.IMREAD_COLOR) print(img.shape) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) ###Output _____no_output_____ ###Markdown Next, we show the image that we randomly selected. ###Code fig,ax = plt.subplots(figsize=(img.shape[1]//50, img.shape[0]//50)) ax.imshow(img.astype(int)) plt.show() ###Output _____no_output_____ ###Markdown Next, we invoke the Amazon SageMaker Endpoint to detect objects in the test image that we randomly selected.This REST API endpoint only accepts HTTP POST requests with `ContentType` set to `application/json`. 
The content of the POST request must conform to following JSON schema:`{ "img_id": "YourImageId", "img_data": "Base64 encoded image file content, encoded as utf-8 string" }`The response of the POST request conforms to following JSON schema:`{ "annotations": [ { "bbox": [X, Y, width, height], "category_id": "class id", "category_name": "class name", "segmentation": { "counts": [ run-length-encoding, ], "size": [height, width]} }, ] }` ###Code import boto3 import base64 import json client = boto3.client('sagemaker-runtime') with open(img_local_path, "rb") as image_file: img_data = base64.b64encode(image_file.read()) data = {"img_id": img_id} data["img_data"] = img_data.decode('utf-8') body=json.dumps(data).encode('utf-8') response = client.invoke_endpoint(EndpointName=endpoint_name, ContentType="application/json", Accept="application/json", Body=body) body=response['Body'].read() msg=body.decode('utf-8') data=json.loads(msg) assert data is not None ###Output _____no_output_____ ###Markdown The response from the endpoint includes annotations for the detected objects in COCO annotations format. Next, we aplly all the detection results to the image. ###Code annotations = data['annotations'] show_detection_results(img, annotations) ###Output _____no_output_____ ###Markdown Delete SageMaker Endpoint, Endpoint Config and ModelIf you are done testing, delete the deployed Amazon SageMaker endpoint, endpoint config, and the model below. The trained model in S3. bucket is not deleted. If you are not done testing, go back to the section Visualize Detection Results and select another test image. ###Code sagemaker_session.delete_endpoint(endpoint_name=endpoint_name) sagemaker_session.delete_endpoint_config(endpoint_config_name=endpoint_config_name) sagemaker_session.delete_model(model_name=model_name) ###Output _____no_output_____ ###Markdown Mask-RCNN Model Inference in Amazon SageMakerThis notebook is a step-by-step tutorial on [Mask R-CNN](https://arxiv.org/abs/1703.06870) model inference using [Amazon SageMaker model deployment hosting service](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html).To get started, we initialize an Amazon execution role and initialize a `boto3` session to find our AWS region name. ###Code import boto3 import sagemaker from sagemaker import get_execution_role role = get_execution_role() # provide a pre-existing role ARN as an alternative to creating a new role print(f'SageMaker Execution Role:{role}') session = boto3.session.Session() aws_region = session.region_name print(f'AWS region:{aws_region}') ###Output _____no_output_____ ###Markdown Build and Push Amazon SageMaker Serving Container ImagesFor this step, the [IAM Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) attached to this notebook instance needs full access to [Amazon ECR service](https://aws.amazon.com/ecr/). If you created this notebook instance using the ```./stack-sm.sh``` script in this repository, the IAM Role attached to this notebook instance is already setup with full access to Amazon ECR service. Below, we have a choice of two different models for doing inference:1. [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN)2. [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow)It is recommended that you build and push both Amazon SageMaker serving container images below and use one of the two container images for serving the model from an Amazon SageMaker Endpoint. 
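The build-and-push scripts in the following cells create the serving repositories in your account's ECR registry. If you prefer not to copy the URIs by hand, they can be constructed programmatically; the sketch below assumes the repository names that appear in the ECR URI comments elsewhere in this notebook (`mask-rcnn-tensorpack-serving-sagemaker` and `mask-rcnn-tensorflow-serving-sagemaker`) and a `latest` image tag, both of which should be verified against the actual output of the `build_and_push.sh` scripts.

```python
# Sketch: derive the ECR image URIs instead of pasting them by hand.
# Repository names and the "latest" tag are assumptions -- check the build_and_push.sh output.
import boto3

account_id = boto3.client("sts").get_caller_identity()["Account"]
registry = f"{account_id}.dkr.ecr.{aws_region}.amazonaws.com"

tensorpack_image = f"{registry}/mask-rcnn-tensorpack-serving-sagemaker:latest"
aws_samples_image = f"{registry}/mask-rcnn-tensorflow-serving-sagemaker:latest"
print(tensorpack_image)
print(aws_samples_image)
```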
Build and Push TensorPack Faster-RCNN/Mask-RCNN Serving Container ImageUse ```./container-serving/build_tools/build_and_push.sh``` script to build and push the [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) serving container image to Amazon ECR. ###Code !cat ./container-serving/build_tools/build_and_push.sh ###Output _____no_output_____ ###Markdown Using your *AWS region* as argument, run the cell below. ###Code %%time ! ./container-serving/build_tools/build_and_push.sh {aws_region} ###Output _____no_output_____ ###Markdown Set ```tensorpack_image``` below to Amazon ECR URI of the serving image you pushed above. ###Code tensorpack_image = #<amazon-ecr-uri> ###Output _____no_output_____ ###Markdown Build and Push AWS Samples Mask R-CNN Serving Container ImageUse ```./container-serving-optimized/build_tools/build_and_push.sh``` script to build and push the [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) serving container image to Amazon ECR. ###Code !cat ./container-serving-optimized/build_tools/build_and_push.sh ###Output _____no_output_____ ###Markdown Using your *AWS region* as argument, run the cell below. ###Code %%time ! ./container-serving-optimized/build_tools/build_and_push.sh {aws_region} ###Output _____no_output_____ ###Markdown Set ```aws_samples_image``` below to Amazon ECR URI of the serving image you pushed above. ###Code aws_samples_image = #<amazon-ecr-uri> ###Output _____no_output_____ ###Markdown Select Serving Container ImageAbove, we built and pushed [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) and [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) serving container images to Amazon ECR. Now we are ready to deploy our trained model to an Amazon SageMaker Endpoint using one of the two container images.Next, we set ```serving_image``` to either the `tensorpack_image` or the `aws_samples_image` variable you defined above, making sure that the serving container image we set below matches our trained model. ###Code serving_image = # set to tensorpack_image or aws_samples_image variable (no string quotes) print(f'serving image: {serving_image}') ###Output _____no_output_____ ###Markdown Create Amazon SageMaker Session Next, we create a SageMaker session. ###Code sagemaker_session = sagemaker.session.Session(boto_session=session) ###Output _____no_output_____ ###Markdown Define Amazon SageMaker ModelNext, we define an Amazon SageMaker Model that defines the deployed model we will serve from an Amazon SageMaker Endpoint. ###Code model_name= 'mask-rcnn-model-1'# Name of the model ###Output _____no_output_____ ###Markdown This model assumes you are using ResNet-50 pre-trained model weights for the ResNet backbone. If this is not true, please adjust `PRETRAINED_MODEL` value below. Please ensure that the `s3_model_url` of your trained model used below is consistent with the container `serving_image` you set above. 
###Code s3_model_url = # Trained Model Amazon S3 URI in the format s3://<your path>/model.tar.gz serving_container_def = { 'Image': serving_image, 'ModelDataUrl': s3_model_url, 'Mode': 'SingleModel', 'Environment': { 'SM_MODEL_DIR' : '/opt/ml/model', 'PRETRAINED_MODEL': '/ImageNet-R50-AlignPadding.npz'} } create_model_response = sagemaker_session.create_model(name=model_name, role=role, container_defs=serving_container_def) print(create_model_response) ###Output _____no_output_____ ###Markdown Next, we set the name of the Amaozn SageMaker hosted service endpoint configuration. ###Code endpoint_config_name=f'{model_name}-endpoint-config' print(endpoint_config_name) ###Output _____no_output_____ ###Markdown Next, we create the Amazon SageMaker hosted service endpoint configuration that uses one instance of `ml.p3.2xlarge` to serve the model. ###Code epc = sagemaker_session.create_endpoint_config( name=endpoint_config_name, model_name=model_name, initial_instance_count=1, instance_type='ml.p3.2xlarge') print(epc) ###Output _____no_output_____ ###Markdown Next we specify the Amazon SageMaker endpoint name for the endpoint used to serve the model. ###Code endpoint_name=f'{model_name}-endpoint' print(endpoint_name) ###Output _____no_output_____ ###Markdown Next, we create the Amazon SageMaker endpoint using the endpoint configuration we created above. ###Code ep=sagemaker_session.create_endpoint(endpoint_name=endpoint_name, config_name=epc, wait=True) print(ep) ###Output _____no_output_____ ###Markdown Now that the Amazon SageMaker endpoint is in service, we will use the endpoint to do inference for test images. Next, we download [COCO 2017 Test images](http://cocodataset.org/download). ###Code !wget -O ~/test2017.zip http://images.cocodataset.org/zips/test2017.zip ###Output _____no_output_____ ###Markdown We extract the downloaded COCO 2017 Test images to the home directory. ###Code !unzip -q -d ~/ ~/test2017.zip !rm ~/test2017.zip ###Output _____no_output_____ ###Markdown Below, we will use the downloaded COCO 2017 Test images to test our deployed Mask R-CNN model. However, in order to visualize the detection results, we need to define some helper functions. Visualization Helper FunctionsNext, we define a helper function to convert COCO Run Length Encoding (RLE) to a binary image mask. The RLE encoding is a dictionary with two keys `counts` and `size`. The `counts` value is a list of counts of run lengths of alternating 0s and 1s for an image binary mask for a specific instance segmentation, with the image is scanned row-wise. The `counts` list starts with a count of 0s. If the binary mask value at `(0,0)` pixel is 1, then the `counts` list starts with a `0`. The `size` value is a list containing image height and width. ###Code import numpy as np def rle_to_binary_mask(rle, img_shape): value = 0 mask_array = [] for count in rle: mask_array.extend([int(value)]*count) value = (value + 1) % 2 assert len(mask_array) == img_shape[0]*img_shape[1] b_mask = np.array(mask_array, dtype=np.uint8).reshape(img_shape) return b_mask ###Output _____no_output_____ ###Markdown Next, we define a helper function for generating random colors for visualizing detection results. 
###Code import colorsys import random def random_colors(N, bright=False): brightness = 1.0 if bright else 0.7 hsv = [(i / N, 1, brightness) for i in range(N)] colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv)) random.shuffle(colors) return colors ###Output _____no_output_____ ###Markdown Next, we define a helper function to apply an image binary mask for an instance segmentation to the image. Each image binary mask is of the size of the image. ###Code def apply_mask(image, mask, color, alpha=0.5): a_mask = np.stack([mask]*3, axis=2).astype(np.int8) for c in range(3): image[:, :, c] = np.where(mask == 1, image[:, :, c] *(1 - alpha) + alpha * color[c]*255,image[:, :, c]) return image ###Output _____no_output_____ ###Markdown Next, we define a helper function to show the applied detection results. ###Code import matplotlib.pyplot as plt from matplotlib import patches def show_detection_results(img=None, annotations=None): """ img: image numpy array annotations: annotations array for image where each annotation is in COCO format """ num_annotations = len(annotations) colors = random_colors(num_annotations) fig,ax = plt.subplots(figsize=(img.shape[1]//50, img.shape[0]//50)) for i, a in enumerate(annotations): segm = a['segmentation'] img_shape = tuple(segm['size']) rle = segm['counts'] binary_image_mask = rle_to_binary_mask(rle, img_shape) bbox = a['bbox'] category_id = a['category_id'] category_name = a['category_name'] # select color from random colors color = colors[i] # Show bounding box bbox_x, bbox_y, bbox_w, bbox_h = bbox box_patch = patches.Rectangle((bbox_x, bbox_y), bbox_w, bbox_h, linewidth=1, alpha=0.7, linestyle="dashed", edgecolor=color, facecolor='none') ax.add_patch(box_patch) label = f'{category_name}:{category_id}' ax.text(bbox_x, bbox_y + 8, label, color='w', size=11, backgroundcolor="none") # Show mask img = apply_mask(img, binary_image_mask.astype(np.bool), color) ax.imshow(img.astype(int)) plt.show() ###Output _____no_output_____ ###Markdown Visualize Detection ResultsNext, we select a random image from COCO 2017 Test image dataset. After you are done visualizing the detection results for this image, you can come back to the cell below and select your next random image to test. ###Code import os import random test2017_dir=os.path.join(os.environ['HOME'], "test2017") img_id=random.choice(os.listdir(test2017_dir)) img_local_path = os.path.join(test2017_dir,img_id) print(img_local_path) ###Output _____no_output_____ ###Markdown Next, we read the image and convert it from BGR color to RGB color format. ###Code import cv2 img=cv2.imread(img_local_path, cv2.IMREAD_COLOR) print(img.shape) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) ###Output _____no_output_____ ###Markdown Next, we show the image that we randomly selected. ###Code fig,ax = plt.subplots(figsize=(img.shape[1]//50, img.shape[0]//50)) ax.imshow(img.astype(int)) plt.show() ###Output _____no_output_____ ###Markdown Next, we invoke the Amazon SageMaker Endpoint to detect objects in the test image that we randomly selected.This REST API endpoint only accepts HTTP POST requests with `ContentType` set to `application/json`. 
The content of the POST request must conform to following JSON schema:`{ "img_id": "YourImageId", "img_data": "Base64 encoded image file content, encoded as utf-8 string" }`The response of the POST request conforms to following JSON schema:`{ "annotations": [ { "bbox": [X, Y, width, height], "category_id": "class id", "category_name": "class name", "segmentation": { "counts": [ run-length-encoding, ], "size": [height, width]} }, ] }` ###Code import boto3 import base64 import json client = boto3.client('sagemaker-runtime') with open(img_local_path, "rb") as image_file: img_data = base64.b64encode(image_file.read()) data = {"img_id": img_id} data["img_data"] = img_data.decode('utf-8') body=json.dumps(data).encode('utf-8') response = client.invoke_endpoint(EndpointName=endpoint_name, ContentType="application/json", Accept="application/json", Body=body) body=response['Body'].read() msg=body.decode('utf-8') data=json.loads(msg) assert data is not None ###Output _____no_output_____ ###Markdown The response from the endpoint includes annotations for the detected objects in COCO annotations format. Next, we aplly all the detection results to the image. ###Code annotations = data['annotations'] show_detection_results(img, annotations) ###Output _____no_output_____ ###Markdown Delete SageMaker Endpoint, Endpoint Config and ModelIf you are done testing, delete the deployed Amazon SageMaker endpoint, endpoint config, and the model below. The trained model in S3. bucket is not deleted. If you are not done testing, go back to the section Visualize Detection Results and select another test image. ###Code sagemaker_session.delete_endpoint(endpoint_name=endpoint_name) sagemaker_session.delete_endpoint_config(endpoint_config_name=endpoint_config_name) sagemaker_session.delete_model(model_name=model_name) ###Output _____no_output_____ ###Markdown Mask-RCNN Model Inference in Amazon SageMakerThis notebook is a step-by-step tutorial on [Mask R-CNN](https://arxiv.org/abs/1703.06870) model inference using [Amazon SageMaker model deployment hosting service](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html).To get started, we initialize an Amazon execution role and initialize a `boto3` session to find our AWS region name. ###Code import boto3 import sagemaker from sagemaker import get_execution_role role = get_execution_role() # provide a pre-existing role ARN as an alternative to creating a new role print(f'SageMaker Execution Role:{role}') session = boto3.session.Session() aws_region = session.region_name print(f'AWS region:{aws_region}') ###Output _____no_output_____ ###Markdown Build and Push Amazon SageMaker Serving Container ImagesFor this step, the [IAM Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) attached to this notebook instance needs full access to [Amazon ECR service](https://aws.amazon.com/ecr/). If you created this notebook instance using the ```./stack-sm.sh``` script in this repository, the IAM Role attached to this notebook instance is already setup with full access to Amazon ECR service. Below, we have a choice of two different models for doing inference:1. [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN)2. [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow)It is recommended that you build and push both Amazon SageMaker serving container images below and use one of the two container images for serving the model from an Amazon SageMaker Endpoint. 
Build and Push TensorPack Faster-RCNN/Mask-RCNN Serving Container ImageUse ```./container-serving/build_tools/build_and_push.sh``` script to build and push the [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) serving container image to Amazon ECR. ###Code !cat ./container-serving/build_tools/build_and_push.sh ###Output _____no_output_____ ###Markdown Using your *AWS region* as argument, run the cell below. ###Code %%time ! ./container-serving/build_tools/build_and_push.sh {aws_region} ###Output _____no_output_____ ###Markdown Set ```tensorpack_image``` below to Amazon ECR URI of the serving image you pushed above. ###Code tensorpack_image = # mask-rcnn-tensorpack-serving-sagemaker ECR URI ###Output _____no_output_____ ###Markdown Build and Push AWS Samples Mask R-CNN Serving Container ImageUse ```./container-serving-optimized/build_tools/build_and_push.sh``` script to build and push the [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) serving container image to Amazon ECR. ###Code !cat ./container-serving-optimized/build_tools/build_and_push.sh ###Output _____no_output_____ ###Markdown Using your *AWS region* as argument, run the cell below. ###Code %%time ! ./container-serving-optimized/build_tools/build_and_push.sh {aws_region} ###Output _____no_output_____ ###Markdown Set ```aws_samples_image``` below to Amazon ECR URI of the serving image you pushed above. ###Code aws_samples_image = # mask-rcnn-tensorflow-serving-sagemaker ECR URI ###Output _____no_output_____ ###Markdown Select Serving Container ImageAbove, we built and pushed [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) and [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) serving container images to Amazon ECR. Now we are ready to deploy our trained model to an Amazon SageMaker Endpoint using one of the two container images.Next, we set ```serving_image``` to either the `tensorpack_image` or the `aws_samples_image` variable you defined above, making sure that the serving container image we set below matches our trained model. ###Code serving_image = # set to tensorpack_image or aws_samples_image variable (no string quotes) print(f'serving image: {serving_image}') ###Output _____no_output_____ ###Markdown Create Amazon SageMaker Session Next, we create a SageMaker session. ###Code sagemaker_session = sagemaker.session.Session(boto_session=session) ###Output _____no_output_____ ###Markdown Define Amazon SageMaker ModelNext, we define an Amazon SageMaker Model that defines the deployed model we will serve from an Amazon SageMaker Endpoint. ###Code model_name= 'mask-rcnn-model-1'# Name of the model ###Output _____no_output_____ ###Markdown This model assumes you are using ResNet-50 pre-trained model weights for the ResNet backbone. If this is not true, please adjust `PRETRAINED_MODEL` value below. Please ensure that the `s3_model_url` of your trained model used below is consistent with the container `serving_image` you set above. 
###Code s3_model_url = # Trained Model Amazon S3 URI in the format s3://<your path>/model.tar.gz serving_container_def = { 'Image': serving_image, 'ModelDataUrl': s3_model_url, 'Mode': 'SingleModel', 'Environment': { 'SM_MODEL_DIR' : '/opt/ml/model', 'RESNET_ARCH': 'resnet50' # 'resnet50' or 'resnet101' } } create_model_response = sagemaker_session.create_model(name=model_name, role=role, container_defs=serving_container_def) print(create_model_response) ###Output _____no_output_____ ###Markdown Next, we set the name of the Amaozn SageMaker hosted service endpoint configuration. ###Code endpoint_config_name=f'{model_name}-endpoint-config' print(endpoint_config_name) ###Output _____no_output_____ ###Markdown Next, we create the Amazon SageMaker hosted service endpoint configuration that uses one instance of `ml.p3.2xlarge` to serve the model. ###Code epc = sagemaker_session.create_endpoint_config( name=endpoint_config_name, model_name=model_name, initial_instance_count=1, instance_type='ml.g4dn.xlarge') print(epc) ###Output _____no_output_____ ###Markdown Next we specify the Amazon SageMaker endpoint name for the endpoint used to serve the model. ###Code endpoint_name=f'{model_name}-endpoint' print(endpoint_name) ###Output _____no_output_____ ###Markdown Next, we create the Amazon SageMaker endpoint using the endpoint configuration we created above. ###Code ep=sagemaker_session.create_endpoint(endpoint_name=endpoint_name, config_name=endpoint_config_name, wait=True) print(ep) ###Output _____no_output_____ ###Markdown Now that the Amazon SageMaker endpoint is in service, we will use the endpoint to do inference for test images. Next, we download [COCO 2017 Test images](http://cocodataset.org/download). ###Code !wget -O ./test2017.zip http://images.cocodataset.org/zips/test2017.zip ###Output _____no_output_____ ###Markdown We extract the downloaded COCO 2017 Test images to the home directory. ###Code !unzip -q ./test2017.zip !rm ./test2017.zip ###Output _____no_output_____ ###Markdown Below, we will use the downloaded COCO 2017 Test images to test our deployed Mask R-CNN model. However, in order to visualize the detection results, we need to define some helper functions. Visualization Helper FunctionsNext, we define a helper function to convert COCO Run Length Encoding (RLE) to a binary image mask. The RLE encoding is a dictionary with two keys `counts` and `size`. The `counts` value is a list of counts of run lengths of alternating 0s and 1s for an image binary mask for a specific instance segmentation, with the image is scanned row-wise. The `counts` list starts with a count of 0s. If the binary mask value at `(0,0)` pixel is 1, then the `counts` list starts with a `0`. The `size` value is a list containing image height and width. ###Code import numpy as np def rle_to_binary_mask(rle, img_shape): value = 0 mask_array = [] for count in rle: mask_array.extend([int(value)]*count) value = (value + 1) % 2 assert len(mask_array) == img_shape[0]*img_shape[1] b_mask = np.array(mask_array, dtype=np.uint8).reshape(img_shape) return b_mask ###Output _____no_output_____ ###Markdown Next, we define a helper function for generating random colors for visualizing detection results. 
###Code import colorsys import random def random_colors(N, bright=False): brightness = 1.0 if bright else 0.7 hsv = [(i / N, 1, brightness) for i in range(N)] colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv)) random.shuffle(colors) return colors ###Output _____no_output_____ ###Markdown Next, we define a helper function to apply an image binary mask for an instance segmentation to the image. Each image binary mask is of the size of the image. ###Code def apply_mask(image, mask, color, alpha=0.5): a_mask = np.stack([mask]*3, axis=2).astype(np.int8) for c in range(3): image[:, :, c] = np.where(mask == 1, image[:, :, c] *(1 - alpha) + alpha * color[c]*255,image[:, :, c]) return image ###Output _____no_output_____ ###Markdown Next, we define a helper function to show the applied detection results. ###Code import matplotlib.pyplot as plt from matplotlib import patches def show_detection_results(img=None, annotations=None): """ img: image numpy array annotations: annotations array for image where each annotation is in COCO format """ num_annotations = len(annotations) colors = random_colors(num_annotations) fig,ax = plt.subplots(figsize=(img.shape[1]//50, img.shape[0]//50)) for i, a in enumerate(annotations): segm = a['segmentation'] img_shape = tuple(segm['size']) rle = segm['counts'] binary_image_mask = rle_to_binary_mask(rle, img_shape) bbox = a['bbox'] category_id = a['category_id'] category_name = a['category_name'] # select color from random colors color = colors[i] # Show bounding box bbox_x, bbox_y, bbox_w, bbox_h = bbox box_patch = patches.Rectangle((bbox_x, bbox_y), bbox_w, bbox_h, linewidth=1, alpha=0.7, linestyle="dashed", edgecolor=color, facecolor='none') ax.add_patch(box_patch) label = f'{category_name}:{category_id}' ax.text(bbox_x, bbox_y + 8, label, color='w', size=11, backgroundcolor="none") # Show mask img = apply_mask(img, binary_image_mask.astype(np.bool), color) ax.imshow(img.astype(int)) plt.show() ###Output _____no_output_____ ###Markdown Visualize Detection ResultsNext, we select a random image from COCO 2017 Test image dataset. After you are done visualizing the detection results for this image, you can come back to the cell below and select your next random image to test. ###Code import os import random test2017_dir=os.path.join(".", "test2017") img_id=random.choice(os.listdir(test2017_dir)) img_local_path = os.path.join(test2017_dir,img_id) print(img_local_path) ###Output _____no_output_____ ###Markdown Next, we read the image and convert it from BGR color to RGB color format. ###Code import cv2 img=cv2.imread(img_local_path, cv2.IMREAD_COLOR) print(img.shape) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) ###Output _____no_output_____ ###Markdown Next, we show the image that we randomly selected. ###Code fig,ax = plt.subplots(figsize=(img.shape[1]//50, img.shape[0]//50)) ax.imshow(img.astype(int)) plt.show() ###Output _____no_output_____ ###Markdown Next, we invoke the Amazon SageMaker Endpoint to detect objects in the test image that we randomly selected.This REST API endpoint only accepts HTTP POST requests with `ContentType` set to `application/json`. 
The content of the POST request must conform to following JSON schema:`{ "img_id": "YourImageId", "img_data": "Base64 encoded image file content, encoded as utf-8 string" }`The response of the POST request conforms to following JSON schema:`{ "annotations": [ { "bbox": [X, Y, width, height], "category_id": "class id", "category_name": "class name", "segmentation": { "counts": [ run-length-encoding, ], "size": [height, width]} }, ] }` ###Code import boto3 import base64 import json client = boto3.client('sagemaker-runtime') with open(img_local_path, "rb") as image_file: img_data = base64.b64encode(image_file.read()) data = {"img_id": img_id} data["img_data"] = img_data.decode('utf-8') body=json.dumps(data).encode('utf-8') response = client.invoke_endpoint(EndpointName=endpoint_name, ContentType="application/json", Accept="application/json", Body=body) body=response['Body'].read() msg=body.decode('utf-8') data=json.loads(msg) assert data is not None ###Output _____no_output_____ ###Markdown The response from the endpoint includes annotations for the detected objects in COCO annotations format. Next, we aplly all the detection results to the image. ###Code annotations = data['annotations'] show_detection_results(img, annotations) ###Output _____no_output_____ ###Markdown Delete SageMaker Endpoint, Endpoint Config and ModelIf you are done testing, delete the deployed Amazon SageMaker endpoint, endpoint config, and the model below. The trained model in S3. bucket is not deleted. If you are not done testing, go back to the section Visualize Detection Results and select another test image. ###Code sagemaker_session.delete_endpoint(endpoint_name=endpoint_name) sagemaker_session.delete_endpoint_config(endpoint_config_name=endpoint_config_name) sagemaker_session.delete_model(model_name=model_name) ###Output _____no_output_____ ###Markdown Mask-RCNN Model Inference in Amazon SageMakerThis notebook is a step-by-step tutorial on [Mask R-CNN](https://arxiv.org/abs/1703.06870) model inference using [Amazon SageMaker model deployment hosting service](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html).To get started, we initialize an Amazon execution role and initialize a `boto3` session to find our AWS region name. ###Code import boto3 import sagemaker from sagemaker import get_execution_role role = get_execution_role() # provide a pre-existing role ARN as an alternative to creating a new role print(f'SageMaker Execution Role:{role}') session = boto3.session.Session() aws_region = session.region_name print(f'AWS region:{aws_region}') ###Output _____no_output_____ ###Markdown Build and Push Amazon SageMaker Serving Container ImagesFor this step, the [IAM Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) attached to this notebook instance needs full access to [Amazon ECR service](https://aws.amazon.com/ecr/). If you created this notebook instance using the ```./stack-sm.sh``` script in this repository, the IAM Role attached to this notebook instance is already setup with full access to Amazon ECR service. Below, we have a choice of two different models for doing inference:1. [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN)2. [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow)It is recommended that you build and push both Amazon SageMaker serving container images below and use one of the two container images for serving the model from an Amazon SageMaker Endpoint. 
Build and Push TensorPack Faster-RCNN/Mask-RCNN Serving Container ImageUse ```./container-serving/build_tools/build_and_push.sh``` script to build and push the [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) serving container image to Amazon ECR. ###Code !cat ./container-serving/build_tools/build_and_push.sh ###Output _____no_output_____ ###Markdown Using your *AWS region* as argument, run the cell below. ###Code %%time ! ./container-serving/build_tools/build_and_push.sh {aws_region} ###Output _____no_output_____ ###Markdown Set ```tensorpack_image``` below to Amazon ECR URI of the serving image you pushed above. ###Code tensorpack_image = # mask-rcnn-tensorpack-serving-sagemaker ECR URI ###Output _____no_output_____ ###Markdown Build and Push AWS Samples Mask R-CNN Serving Container ImageUse ```./container-serving-optimized/build_tools/build_and_push.sh``` script to build and push the [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) serving container image to Amazon ECR. ###Code !cat ./container-serving-optimized/build_tools/build_and_push.sh ###Output _____no_output_____ ###Markdown Using your *AWS region* as argument, run the cell below. ###Code %%time ! ./container-serving-optimized/build_tools/build_and_push.sh {aws_region} ###Output _____no_output_____ ###Markdown Set ```aws_samples_image``` below to Amazon ECR URI of the serving image you pushed above. ###Code aws_samples_image = # mask-rcnn-tensorflow-serving-sagemaker ECR URI ###Output _____no_output_____ ###Markdown Select Serving Container ImageAbove, we built and pushed [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) and [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) serving container images to Amazon ECR. Now we are ready to deploy our trained model to an Amazon SageMaker Endpoint using one of the two container images.Next, we set ```serving_image``` to either the `tensorpack_image` or the `aws_samples_image` variable you defined above, making sure that the serving container image we set below matches our trained model. ###Code serving_image = # set to tensorpack_image or aws_samples_image variable (no string quotes) print(f'serving image: {serving_image}') ###Output _____no_output_____ ###Markdown Create Amazon SageMaker Session Next, we create a SageMaker session. ###Code sagemaker_session = sagemaker.session.Session(boto_session=session) ###Output _____no_output_____ ###Markdown Define Amazon SageMaker ModelNext, we define an Amazon SageMaker Model that defines the deployed model we will serve from an Amazon SageMaker Endpoint. ###Code model_name= 'mask-rcnn-model-1'# Name of the model ###Output _____no_output_____ ###Markdown This model assumes you are using ResNet-50 pre-trained model weights for the ResNet backbone. If this is not true, please adjust `PRETRAINED_MODEL` value below. Please ensure that the `s3_model_url` of your trained model used below is consistent with the container `serving_image` you set above. 
###Code s3_model_url = # Trained Model Amazon S3 URI in the format s3://<your path>/model.tar.gz serving_container_def = { 'Image': serving_image, 'ModelDataUrl': s3_model_url, 'Mode': 'SingleModel', 'Environment': { 'SM_MODEL_DIR' : '/opt/ml/model', 'RESNET_ARCH': 'resnet50' # 'resnet50' or 'resnet101' } } create_model_response = sagemaker_session.create_model(name=model_name, role=role, container_defs=serving_container_def) print(create_model_response) ###Output _____no_output_____ ###Markdown Next, we set the name of the Amaozn SageMaker hosted service endpoint configuration. ###Code endpoint_config_name=f'{model_name}-endpoint-config' print(endpoint_config_name) ###Output _____no_output_____ ###Markdown Next, we create the Amazon SageMaker hosted service endpoint configuration that uses one instance of `ml.p3.2xlarge` to serve the model. ###Code epc = sagemaker_session.create_endpoint_config( name=endpoint_config_name, model_name=model_name, initial_instance_count=1, instance_type='ml.g4dn.xlarge') print(epc) ###Output _____no_output_____ ###Markdown Next we specify the Amazon SageMaker endpoint name for the endpoint used to serve the model. ###Code endpoint_name=f'{model_name}-endpoint' print(endpoint_name) ###Output _____no_output_____ ###Markdown Next, we create the Amazon SageMaker endpoint using the endpoint configuration we created above. ###Code ep=sagemaker_session.create_endpoint(endpoint_name=endpoint_name, config_name=endpoint_config_name, wait=True) print(ep) ###Output _____no_output_____ ###Markdown Now that the Amazon SageMaker endpoint is in service, we will use the endpoint to do inference for test images. Next, we download [COCO 2017 Test images](http://cocodataset.org/download). ###Code !wget -O ~/test2017.zip http://images.cocodataset.org/zips/test2017.zip ###Output _____no_output_____ ###Markdown We extract the downloaded COCO 2017 Test images to the home directory. ###Code !unzip -q -d ~/ ~/test2017.zip !rm ~/test2017.zip ###Output _____no_output_____ ###Markdown Below, we will use the downloaded COCO 2017 Test images to test our deployed Mask R-CNN model. However, in order to visualize the detection results, we need to define some helper functions. Visualization Helper FunctionsNext, we define a helper function to convert COCO Run Length Encoding (RLE) to a binary image mask. The RLE encoding is a dictionary with two keys `counts` and `size`. The `counts` value is a list of counts of run lengths of alternating 0s and 1s for an image binary mask for a specific instance segmentation, with the image is scanned row-wise. The `counts` list starts with a count of 0s. If the binary mask value at `(0,0)` pixel is 1, then the `counts` list starts with a `0`. The `size` value is a list containing image height and width. ###Code import numpy as np def rle_to_binary_mask(rle, img_shape): value = 0 mask_array = [] for count in rle: mask_array.extend([int(value)]*count) value = (value + 1) % 2 assert len(mask_array) == img_shape[0]*img_shape[1] b_mask = np.array(mask_array, dtype=np.uint8).reshape(img_shape) return b_mask ###Output _____no_output_____ ###Markdown Next, we define a helper function for generating random colors for visualizing detection results. 
###Code import colorsys import random def random_colors(N, bright=False): brightness = 1.0 if bright else 0.7 hsv = [(i / N, 1, brightness) for i in range(N)] colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv)) random.shuffle(colors) return colors ###Output _____no_output_____ ###Markdown Next, we define a helper function to apply an image binary mask for an instance segmentation to the image. Each image binary mask is of the size of the image. ###Code def apply_mask(image, mask, color, alpha=0.5): a_mask = np.stack([mask]*3, axis=2).astype(np.int8) for c in range(3): image[:, :, c] = np.where(mask == 1, image[:, :, c] *(1 - alpha) + alpha * color[c]*255,image[:, :, c]) return image ###Output _____no_output_____ ###Markdown Next, we define a helper function to show the applied detection results. ###Code import matplotlib.pyplot as plt from matplotlib import patches def show_detection_results(img=None, annotations=None): """ img: image numpy array annotations: annotations array for image where each annotation is in COCO format """ num_annotations = len(annotations) colors = random_colors(num_annotations) fig,ax = plt.subplots(figsize=(img.shape[1]//50, img.shape[0]//50)) for i, a in enumerate(annotations): segm = a['segmentation'] img_shape = tuple(segm['size']) rle = segm['counts'] binary_image_mask = rle_to_binary_mask(rle, img_shape) bbox = a['bbox'] category_id = a['category_id'] category_name = a['category_name'] # select color from random colors color = colors[i] # Show bounding box bbox_x, bbox_y, bbox_w, bbox_h = bbox box_patch = patches.Rectangle((bbox_x, bbox_y), bbox_w, bbox_h, linewidth=1, alpha=0.7, linestyle="dashed", edgecolor=color, facecolor='none') ax.add_patch(box_patch) label = f'{category_name}:{category_id}' ax.text(bbox_x, bbox_y + 8, label, color='w', size=11, backgroundcolor="none") # Show mask img = apply_mask(img, binary_image_mask.astype(np.bool), color) ax.imshow(img.astype(int)) plt.show() ###Output _____no_output_____ ###Markdown Visualize Detection ResultsNext, we select a random image from COCO 2017 Test image dataset. After you are done visualizing the detection results for this image, you can come back to the cell below and select your next random image to test. ###Code import os import random test2017_dir=os.path.join(os.environ['HOME'], "test2017") img_id=random.choice(os.listdir(test2017_dir)) img_local_path = os.path.join(test2017_dir,img_id) print(img_local_path) ###Output _____no_output_____ ###Markdown Next, we read the image and convert it from BGR color to RGB color format. ###Code import cv2 img=cv2.imread(img_local_path, cv2.IMREAD_COLOR) print(img.shape) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) ###Output _____no_output_____ ###Markdown Next, we show the image that we randomly selected. ###Code fig,ax = plt.subplots(figsize=(img.shape[1]//50, img.shape[0]//50)) ax.imshow(img.astype(int)) plt.show() ###Output _____no_output_____ ###Markdown Next, we invoke the Amazon SageMaker Endpoint to detect objects in the test image that we randomly selected.This REST API endpoint only accepts HTTP POST requests with `ContentType` set to `application/json`. 
The content of the POST request must conform to following JSON schema:`{ "img_id": "YourImageId", "img_data": "Base64 encoded image file content, encoded as utf-8 string" }`The response of the POST request conforms to following JSON schema:`{ "annotations": [ { "bbox": [X, Y, width, height], "category_id": "class id", "category_name": "class name", "segmentation": { "counts": [ run-length-encoding, ], "size": [height, width]} }, ] }` ###Code import boto3 import base64 import json client = boto3.client('sagemaker-runtime') with open(img_local_path, "rb") as image_file: img_data = base64.b64encode(image_file.read()) data = {"img_id": img_id} data["img_data"] = img_data.decode('utf-8') body=json.dumps(data).encode('utf-8') response = client.invoke_endpoint(EndpointName=endpoint_name, ContentType="application/json", Accept="application/json", Body=body) body=response['Body'].read() msg=body.decode('utf-8') data=json.loads(msg) assert data is not None ###Output _____no_output_____ ###Markdown The response from the endpoint includes annotations for the detected objects in COCO annotations format. Next, we aplly all the detection results to the image. ###Code annotations = data['annotations'] show_detection_results(img, annotations) ###Output _____no_output_____ ###Markdown Delete SageMaker Endpoint, Endpoint Config and ModelIf you are done testing, delete the deployed Amazon SageMaker endpoint, endpoint config, and the model below. The trained model in S3. bucket is not deleted. If you are not done testing, go back to the section Visualize Detection Results and select another test image. ###Code sagemaker_session.delete_endpoint(endpoint_name=endpoint_name) sagemaker_session.delete_endpoint_config(endpoint_config_name=endpoint_config_name) sagemaker_session.delete_model(model_name=model_name) ###Output _____no_output_____ ###Markdown Mask-RCNN Model Inference in Amazon SageMakerThis notebook is a step-by-step tutorial on [Mask R-CNN](https://arxiv.org/abs/1703.06870) model inference using [Amazon SageMaker model deployment hosting service](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html).To get started, we initialize an Amazon execution role and initialize a `boto3` session to find our AWS region name. ###Code import boto3 import sagemaker from sagemaker import get_execution_role role = get_execution_role() # provide a pre-existing role ARN as an alternative to creating a new role print(f'SageMaker Execution Role:{role}') session = boto3.session.Session() aws_region = "ap-southeast-1" #session.region_name print(f'AWS region:{aws_region}') ###Output SageMaker Execution Role:arn:aws:iam::393782509758:role/service-role/AmazonSageMaker-ExecutionRole-20191115T140457 AWS region:ap-southeast-1 ###Markdown Build and Push Amazon SageMaker Serving Container ImagesFor this step, the [IAM Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) attached to this notebook instance needs full access to [Amazon ECR service](https://aws.amazon.com/ecr/). If you created this notebook instance using the ```./stack-sm.sh``` script in this repository, the IAM Role attached to this notebook instance is already setup with full access to Amazon ECR service. Below, we have a choice of two different models for doing inference:1. [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN)2. 
[AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow)It is recommended that you build and push both Amazon SageMaker serving container images below and use one of the two container images for serving the model from an Amazon SageMaker Endpoint. Build and Push TensorPack Faster-RCNN/Mask-RCNN Serving Container ImageUse ```./container-serving/build_tools/build_and_push.sh``` script to build and push the [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) serving container image to Amazon ECR. ###Code !cat ./container-serving/build_tools/build_and_push.sh ###Output #!/usr/bin/env bash # This script shows how to build the Docker image and push it to ECR to be ready for use # by SageMaker. # The argument to this script is the image name. This will be used as the image on the local # machine and combined with the account and region to form the repository name for ECR. DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" source $DIR/set_env.sh # set region region= if [ "$#" -eq 1 ]; then region=$1 else echo "usage: $0 <aws-region>" exit 1 fi image=$IMAGE_NAME tag=$IMAGE_TAG # Get the account number associated with the current IAM credentials account=$(aws sts get-caller-identity --query Account --output text) if [ $? -ne 0 ] then exit 255 fi fullname="${account}.dkr.ecr.${region}.amazonaws.com/${image}:${tag}" # If the repository doesn't exist in ECR, create it. aws ecr describe-repositories --region ${region} --repository-names "${image}" > /dev/null 2>&1 if [ $? -ne 0 ]; then aws ecr create-repository --region ${region} --repository-name "${image}" > /dev/null fi # Build the docker image locally with the image name and then push it to ECR # with the full name. # Get the login command from ECR and execute it directly $(aws ecr get-login --no-include-email --region us-west-2 --registry-ids 763104351884) docker build -t ${image} $DIR/.. docker tag ${image} ${fullname} # Get the login command from ECR and execute it directly $(aws ecr get-login --region ${region} --no-include-email) docker push ${fullname} if [ $? -eq 0 ]; then echo "Amazon ECR URI: ${fullname}" else echo "Error: Image build and push failed" exit 1 fi ###Markdown Using your *AWS region* as argument, run the cell below. ###Code %%time ! ./container-serving/build_tools/build_and_push.sh {aws_region} ###Output Invalid endpoint: https://api.ecr.{aws_region}.amazonaws.com WARNING! Using --password via the CLI is insecure. Use --password-stdin. WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. Configure a credential helper to remove this warning. 
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded Sending build context to Docker daemon 95.2MB Step 1/27 : FROM 763104351884.dkr.ecr.us-west-2.amazonaws.com/tensorflow-training:1.13-horovod-gpu-py36-cu100-ubuntu16.04-v2.0 ---> 9c51cff02c92 Step 2/27 : ENV TENSORFLOW_VERSION=1.13.2 ---> Using cache ---> cb59c849dccd Step 3/27 : ENV HOROVOD_VERSION=0.16.4 ---> Using cache ---> 61cc5be5aea3 Step 4/27 : RUN pip install --upgrade pip ---> Using cache ---> d21317981669 Step 5/27 : RUN pip install tensorflow-gpu==${TENSORFLOW_VERSION} keras h5py ---> Using cache ---> d5a4313e4c9f Step 6/27 : RUN ldconfig /usr/local/cuda-10.0/targets/x86_64-linux/lib/stubs && HOROVOD_GPU_ALLREDUCE=NCCL HOROVOD_WITH_TENSORFLOW=1 pip install --no-cache-dir horovod==${HOROVOD_VERSION} && ldconfig ---> Using cache ---> 2989248330ad Step 7/27 : RUN apt-get install -y --no-install-recommends openssh-client openssh-server ---> Using cache ---> fe17f50cacaa Step 8/27 : RUN mkdir -p /var/run/sshd && sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd ---> Using cache ---> 6e47d62853cf Step 9/27 : RUN rm -rf /root/.ssh/ && mkdir -p /root/.ssh/ && ssh-keygen -q -t rsa -N '' -f /root/.ssh/id_rsa && cp /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys && printf "Host *\n StrictHostKeyChecking no\n" >> /root/.ssh/config ---> Using cache ---> a2ad66038dba Step 10/27 : RUN pip install awscli ---> Using cache ---> c92c35ce1847 Step 11/27 : RUN pip install boto3 ---> Using cache ---> 0db1e64c3f08 Step 12/27 : RUN pip install ujson==1.35 ---> Using cache ---> ea9d7ffe4e31 Step 13/27 : RUN pip install opencv-python==4.1.0.25 ---> Using cache ---> 0a0b73d36368 Step 14/27 : RUN pip install Cython==0.28.4 ---> Using cache ---> f820ac692946 Step 15/27 : RUN pip install pycocotools==2.0.0 ---> Using cache ---> 4ace108f6764 Step 16/27 : RUN pip install matplotlib==3.0.3 ---> Using cache ---> eeff93f3d8f0 Step 17/27 : RUN pip install markdown==3.1 ---> Using cache ---> 2c48368eb907 Step 18/27 : RUN pip install numpy==1.17.5 ---> Using cache ---> c8f84cae27bf Step 19/27 : RUN git clone https://github.com/tensorpack/tensorpack.git /tensorpack ---> Using cache ---> f0b1eb8953ca Step 20/27 : RUN cd /tensorpack && git fetch origin 26664c3f1d58ae029ea6c3ba0af6ae11900b1e55 ---> Using cache ---> a975ca59e74c Step 21/27 : RUN cd /tensorpack && git reset --hard 26664c3f1d58ae029ea6c3ba0af6ae11900b1e55 ---> Using cache ---> 472a102a5918 Step 22/27 : RUN pip install -e /tensorpack ---> Using cache ---> 1e036b954f00 Step 23/27 : RUN pip install flask ---> Using cache ---> 331ed1ab5b8c Step 24/27 : RUN apt-get -y update && apt-get install -y --no-install-recommends wget nginx ca-certificates && rm -rf /var/lib/apt/lists/* ---> Using cache ---> f8a9e1501fec Step 25/27 : COPY resources/*.* / ---> Using cache ---> 5991186e3567 Step 26/27 : ENV WORKDIR / ---> Using cache ---> f4215afc8b3a Step 27/27 : ENTRYPOINT ["python", "/serve.py"] ---> Using cache ---> e20a2c8cef97 Successfully built e20a2c8cef97 Successfully tagged mask-rcnn-tensorpack-serving-sagemaker:latest Error parsing reference: "393782509758.dkr.ecr.{aws_region}.amazonaws.com/mask-rcnn-tensorpack-serving-sagemaker:tf1.13-tp26664c3" is not a valid repository/tag: invalid reference format Invalid endpoint: https://api.ecr.{aws_region}.amazonaws.com invalid reference format Error: Image build and push failed CPU times: user 156 ms, sys: 36.1 ms, total: 192 ms Wall time: 7.39 s ###Markdown Set 
```tensorpack_image``` below to Amazon ECR URI of the serving image you pushed above. ###Code tensorpack_image = "393782509758.dkr.ecr.ap-southeast-1.amazonaws.com/mask-rcnn-tensorpack-serving-sagemaker:tf1.13-tp26664c3"#<amazon-ecr-uri> ###Output _____no_output_____ ###Markdown Build and Push AWS Samples Mask R-CNN Serving Container ImageUse ```./container-serving-optimized/build_tools/build_and_push.sh``` script to build and push the [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) serving container image to Amazon ECR. ###Code !cat ./container-serving-optimized/build_tools/build_and_push.sh ###Output #!/usr/bin/env bash # This script shows how to build the Docker image and push it to ECR to be ready for use # by SageMaker. # The argument to this script is the image name. This will be used as the image on the local # machine and combined with the account and region to form the repository name for ECR. DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" source $DIR/set_env.sh # set region region= if [ "$#" -eq 1 ]; then region=$1 else echo "usage: $0 <aws-region>" exit 1 fi image=$IMAGE_NAME tag=$IMAGE_TAG # Get the account number associated with the current IAM credentials account=$(aws sts get-caller-identity --query Account --output text) if [ $? -ne 0 ] then exit 255 fi fullname="${account}.dkr.ecr.${region}.amazonaws.com/${image}:${tag}" # If the repository doesn't exist in ECR, create it. aws ecr describe-repositories --region ${region} --repository-names "${image}" > /dev/null 2>&1 if [ $? -ne 0 ]; then aws ecr create-repository --region ${region} --repository-name "${image}" > /dev/null fi # Build the docker image locally with the image name and then push it to ECR # with the full name. # Get the login command from ECR and execute it directly $(aws ecr get-login --no-include-email --region us-west-2 --registry-ids 763104351884) docker build -t ${image} $DIR/.. docker tag ${image} ${fullname} # Get the login command from ECR and execute it directly $(aws ecr get-login --region ${region} --no-include-email) docker push ${fullname} if [ $? -eq 0 ]; then echo "Amazon ECR URI: ${fullname}" else echo "Error: Image build and push failed" exit 1 fi ###Markdown Using your *AWS region* as argument, run the cell below. ###Code ! $(aws ecr get-login --no-include-email --region ap-southeast-1 --registry-ids 393782509758) %%time ! ./container-serving-optimized/build_tools/build_and_push.sh {aws_region} ###Output WARNING! Using --password via the CLI is insecure. Use --password-stdin. WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. Configure a credential helper to remove this warning. 
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded Sending build context to Docker daemon 95.22MB Step 1/28 : FROM 393782509758.dkr.ecr.ap-southeast-1.amazonaws.com/invoice-extraction:tensorflow-base-1.13.1-gpu-py36-ubuntu-16.04 tensorflow-base-1.13.1-gpu-py36-ubuntu-16.04: Pulling from invoice-extraction 760c94fc: Pulling fs layer 92f3c37b: Pulling fs layer e5e7f12e: Pulling fs layer 74cc00ca: Pulling fs layer 53113e13: Pulling fs layer 2edc87fb: Pulling fs layer f57a58ef: Pulling fs layer aa495279: Pulling fs layer 63b2d43d: Pulling fs layer d887aa10: Pulling fs layer aa4e079e: Pulling fs layer a698eb87: Pulling fs layer 2ff0c9a2: Pulling fs layer ddc7977b: Pulling fs layer 16202287: Pulling fs layer 52b810a0: Pulling fs layer 753a2d4b: Pulling fs layer b701a083: Pulling fs layer 48fc0cc6: Pulling fs layer 99c2ef74: Pulling fs layer ###Markdown Set ```aws_samples_image``` below to Amazon ECR URI of the serving image you pushed above. ###Code aws_samples_image = "393782509758.dkr.ecr.ap-southeast-1.amazonaws.com/mask-rcnn-tensorpack-serving-sagemaker:tf1.13-tp26664c3" #<amazon-ecr-uri> ###Output _____no_output_____ ###Markdown Select Serving Container ImageAbove, we built and pushed [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) and [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) serving container images to Amazon ECR. Now we are ready to deploy our trained model to an Amazon SageMaker Endpoint using one of the two container images.Next, we set ```serving_image``` to either the `tensorpack_image` or the `aws_samples_image` variable you defined above, making sure that the serving container image we set below matches our trained model. ###Code serving_image = aws_samples_image # set to tensorpack_image or aws_samples_image variable (no string quotes) print(f'serving image: {serving_image}') ###Output _____no_output_____ ###Markdown Create Amazon SageMaker Session Next, we create a SageMaker session. ###Code sagemaker_session = sagemaker.session.Session(boto_session=session) ###Output _____no_output_____ ###Markdown Define Amazon SageMaker ModelNext, we define an Amazon SageMaker Model that defines the deployed model we will serve from an Amazon SageMaker Endpoint. ###Code model_name= 'invoice-extraction-invoice-number-256x256-v4'# Name of the model ###Output _____no_output_____ ###Markdown This model assumes you are using ResNet-50 pre-trained model weights for the ResNet backbone. If this is not true, please adjust `PRETRAINED_MODEL` value below. Please ensure that the `s3_model_url` of your trained model used below is consistent with the container `serving_image` you set above. ###Code s3_model_url = "s3://smart-invoice/mask-rcnn/sagemaker/output/mask-rcnn-s3-2020-04-03-10-45-02-023-optimize-20/output/model.tar.gz"# Trained Model Amazon S3 URI in the format s3://<your path>/model.tar.gz serving_container_def = { 'Image': serving_image, 'ModelDataUrl': s3_model_url, 'Mode': 'SingleModel', 'Environment': { 'SM_MODEL_DIR' : '/opt/ml/model', 'PRETRAINED_MODEL': '/ImageNet-R50-AlignPadding.npz'} } create_model_response = sagemaker_session.create_model(name=model_name, role=role, container_defs=serving_container_def) print(create_model_response) ###Output _____no_output_____ ###Markdown Next, we set the name of the Amaozn SageMaker hosted service endpoint configuration. 
###Code endpoint_config_name=f'{model_name}-endpoint-config' print(endpoint_config_name) ###Output _____no_output_____ ###Markdown Next, we create the Amazon SageMaker hosted service endpoint configuration that uses one instance of `ml.p3.2xlarge` to serve the model. ###Code epc = sagemaker_session.create_endpoint_config( name=endpoint_config_name, model_name=model_name, initial_instance_count=1, instance_type='ml.g4dn.xlarge') print(epc) ###Output _____no_output_____ ###Markdown Next we specify the Amazon SageMaker endpoint name for the endpoint used to serve the model. ###Code endpoint_name=f'{model_name}-endpoint' print(endpoint_name) ###Output _____no_output_____ ###Markdown Next, we create the Amazon SageMaker endpoint using the endpoint configuration we created above. ###Code ep=sagemaker_session.create_endpoint(endpoint_name=endpoint_name, config_name=epc) print(ep) ###Output _____no_output_____ ###Markdown Now that the Amazon SageMaker endpoint is in service, we will use the endpoint to do inference for test images. Next, we download [COCO 2017 Test images](http://cocodataset.org/download). ###Code # !wget -O ~/test2017.zip http://images.cocodataset.org/zips/test2017.zip ###Output _____no_output_____ ###Markdown We extract the downloaded COCO 2017 Test images to the home directory. ###Code #!unzip -q -d ~/ ~/test2017.zip #!rm ~/test2017.zip !mkdir -p invoice_dataset_400/files !aws s3 cp "s3://kredx-ai-resources/invoice-annotation-job/files" "invoice_dataset_400/files" --recursive !aws s3 cp "s3://smart-invoice/mask-rcnn/sagemaker/input/invoice_dataset_600" "invoice_dataset_600" --recursive !aws s3 cp "s3://kredx-ai-resources/invoice-annotation-job-2/files" "files" --recursive --exclude "*" --include "*.jpg" ###Output _____no_output_____ ###Markdown Below, we will use the downloaded COCO 2017 Test images to test our deployed Mask R-CNN model. However, in order to visualize the detection results, we need to define some helper functions. Visualization Helper FunctionsNext, we define a helper function to convert COCO Run Length Encoding (RLE) to a binary image mask. The RLE encoding is a dictionary with two keys `counts` and `size`. The `counts` value is a list of counts of run lengths of alternating 0s and 1s for an image binary mask for a specific instance segmentation, with the image is scanned row-wise. The `counts` list starts with a count of 0s. If the binary mask value at `(0,0)` pixel is 1, then the `counts` list starts with a `0`. The `size` value is a list containing image height and width. ###Code import numpy as np def rle_to_binary_mask(rle, img_shape): value = 0 mask_array = [] for count in rle: mask_array.extend([int(value)]*count) value = (value + 1) % 2 assert len(mask_array) == img_shape[0]*img_shape[1] b_mask = np.array(mask_array, dtype=np.uint8).reshape(img_shape) return b_mask ###Output _____no_output_____ ###Markdown Next, we define a helper function for generating random colors for visualizing detection results. ###Code import colorsys import random def random_colors(N, bright=False): brightness = 1.0 if bright else 0.7 hsv = [(i / N, 1, brightness) for i in range(N)] colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv)) random.shuffle(colors) return colors ###Output _____no_output_____ ###Markdown Next, we define a helper function to apply an image binary mask for an instance segmentation to the image. Each image binary mask is of the size of the image. 
###Code def apply_mask(image, mask, color, alpha=0.5): a_mask = np.stack([mask]*3, axis=2).astype(np.int8) for c in range(3): image[:, :, c] = np.where(mask == 1, image[:, :, c] *(1 - alpha) + alpha * color[c]*255,image[:, :, c]) return image ###Output _____no_output_____ ###Markdown Next, we define a helper function to show the applied detection results. ###Code import matplotlib.pyplot as plt from matplotlib import patches def show_detection_results(img=None, annotations=None): """ img: image numpy array annotations: annotations array for image where each annotation is in COCO format """ num_annotations = len(annotations) colors = random_colors(num_annotations) fig,ax = plt.subplots(figsize=(img.shape[1]//50, img.shape[0]//50)) for i, a in enumerate(annotations): segm = a['segmentation'] img_shape = tuple(segm['size']) rle = segm['counts'] binary_image_mask = rle_to_binary_mask(rle, img_shape) bbox = a['bbox'] category_id = a['category_id'] category_name = a['category_name'] # select color from random colors color = colors[i] # Show bounding box bbox_x, bbox_y, bbox_w, bbox_h = bbox box_patch = patches.Rectangle((bbox_x, bbox_y), bbox_w, bbox_h, linewidth=1, alpha=0.8, linestyle="dashed", edgecolor=color, facecolor='none') ax.add_patch(box_patch) label = f'{category_name}:{category_id}' ax.text(bbox_x, bbox_y + 8, label, color='red', size=11, backgroundcolor="none") # Show mask img = apply_mask(img, binary_image_mask.astype(np.bool), color) ax.imshow(img.astype(int)) plt.show() ###Output _____no_output_____ ###Markdown Visualize Detection ResultsNext, we select a random image from COCO 2017 Test image dataset. After you are done visualizing the detection results for this image, you can come back to the cell below and select your next random image to test. ###Code import os import random test2017_dir="invoice_dataset_600/val2017" files2017_dir="files" img_id=random.choice(os.listdir(test2017_dir)) img_local_path = os.path.join(test2017_dir,img_id) file_img_local_path = os.path.join(files2017_dir,img_id.replace("png","jpg")) print(img_local_path) print(file_img_local_path) ###Output _____no_output_____ ###Markdown Next, we read the image and convert it from BGR color to RGB color format. ###Code import cv2 img=cv2.imread(img_local_path, cv2.IMREAD_COLOR) print(img.shape) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) ###Output _____no_output_____ ###Markdown Next, we show the image that we randomly selected. ###Code fig,ax = plt.subplots(figsize=(img.shape[1]//50, img.shape[0]//50)) ax.imshow(img.astype(int)) plt.show() ###Output _____no_output_____ ###Markdown Next, we invoke the Amazon SageMaker Endpoint to detect objects in the test image that we randomly selected.This REST API endpoint only accepts HTTP POST requests with `ContentType` set to `application/json`. 
The content of the POST request must conform to following JSON schema:`{ "img_id": "YourImageId", "img_data": "Base64 encoded image file content, encoded as utf-8 string" }`The response of the POST request conforms to following JSON schema:`{ "annotations": [ { "bbox": [X, Y, width, height], "category_id": "class id", "category_name": "class name", "segmentation": { "counts": [ run-length-encoding, ], "size": [height, width]} }, ] }` ###Code import boto3 import base64 import json client = boto3.client('sagemaker-runtime') with open(img_local_path, "rb") as image_file: img_data = base64.b64encode(image_file.read()) data = {"img_id": img_id} data["img_data"] = img_data.decode('utf-8') body=json.dumps(data).encode('utf-8') response = client.invoke_endpoint(EndpointName=endpoint_name, ContentType="application/json", Accept="application/json", Body=body) body=response['Body'].read() msg=body.decode('utf-8') data=json.loads(msg) assert data is not None annotations = data['annotations'] print(annotations) ###Output _____no_output_____ ###Markdown The response from the endpoint includes annotations for the detected objects in COCO annotations format. Next, we aplly all the detection results to the image. ###Code show_detection_results(img, annotations) import time images = os.listdir(test2017_dir) for img_id in images: img_local_path = os.path.join(test2017_dir,img_id) img=cv2.imread(img_local_path, cv2.IMREAD_COLOR) try: print(img.shape) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) img = cv2.resize((800,800), img) cv2.imwrite(img_local_path, img) with open(img_local_path, "rb") as image_file: img_data = base64.b64encode(image_file.read()) data = {"img_id": img_id} data["img_data"] = img_data.decode('utf-8') body=json.dumps(data).encode('utf-8') response = client.invoke_endpoint(EndpointName=endpoint_name, ContentType="application/json", Accept="application/json", Body=body) body=response['Body'].read() msg=body.decode('utf-8') data=json.loads(msg) if data: annotations = data['annotations'] print(annotations) show_detection_results(img, annotations) except: pass ###Output _____no_output_____ ###Markdown Delete SageMaker Endpoint, Endpoint Config and ModelIf you are done testing, delete the deployed Amazon SageMaker endpoint, endpoint config, and the model below. The trained model in S3. bucket is not deleted. If you are not done testing, go back to the section Visualize Detection Results and select another test image. ###Code sagemaker_session.delete_endpoint(endpoint_name=endpoint_name) sagemaker_session.delete_endpoint_config(endpoint_config_name=endpoint_config_name) sagemaker_session.delete_model(model_name=model_name) !docker pull "393782509758.dkr.ecr.ap-southeast-1.amazonaws.com/invoice-extraction:tensorflow-base-1.13.1-gpu-py36-ubuntu-16.04" ###Output _____no_output_____
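###Markdown A note on the batch-inference loop a few cells above: `cv2.resize` expects the source image first and the target size second, so `cv2.resize((800,800), img)` raises an error that the bare `except: pass` silently swallows. A minimal corrected sketch of that resize step (illustrative only, using the same variable names as in the loop above): ###Code
# Hypothetical correction for the resize step in the loop above:
# cv2.resize takes (src, dsize) -- the image first, then the (width, height) tuple.
img = cv2.resize(img, (800, 800))
cv2.imwrite(img_local_path, img)
###Output _____no_output_____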
Model_build_notebook.ipynb
###Markdown Data Acquisition ###Code import pandas as pd import matplotlib.pyplot as plt import seaborn as sns data = pd.read_csv("/content/healthcare-dataset-stroke-data.csv") data.head() ###Output _____no_output_____ ###Markdown Data Analysis ###Code data.shape data.info() data.isnull().sum() ###Output _____no_output_____ ###Markdown Only the 'bmi' column holds missing values, so we explore that column further for insights. ###Code data['bmi'].value_counts() data['bmi'].describe() ###Output _____no_output_____ ###Markdown Data Preprocessing Handling missing data ###Code # filling null values in the 'bmi' column with the column mean data['bmi'].fillna(data['bmi'].mean(), inplace = True) ###Output _____no_output_____ ###Markdown Feature Selection ###Code # dropping unnecessary columns data.drop('id', axis = 1, inplace = True) data.drop('ever_married', axis = 1, inplace = True) data.shape ###Output _____no_output_____ ###Markdown Handling outliers ###Code # checking for outliers fig, ax = plt.subplots(figsize = (8, 8)) sns.boxplot(data = data, palette = 'Set2') plt.show(); print(data['avg_glucose_level'].describe()) print(" ") print('Count of outliers in avg_glucose_level = ', data[data['avg_glucose_level'] > 114].shape[0] ) data['bmi'].describe() quan = data['avg_glucose_level'].quantile(0.78) quan2 = data['bmi'].quantile(0.98) print("Quantile limit for avg_glucose_level = ", quan) print("Quantile limit for bmi = ", quan2) # removing outliers above the quantile limits filtered_data = data[data['avg_glucose_level'] < quan] filtered_data = filtered_data[filtered_data['bmi'] < quan2] filtered_data.shape data.shape ###Output _____no_output_____ ###Markdown Encoding ###Code # performing label encoding on the categorical columns from sklearn.preprocessing import LabelEncoder encoder = LabelEncoder() gender = encoder.fit_transform( filtered_data['gender'] ) smoking_status = encoder.fit_transform( filtered_data['smoking_status'] ) work_type = encoder.fit_transform( filtered_data['work_type'] ) Residence_type = encoder.fit_transform( filtered_data['Residence_type'] ) filtered_data['work_type'] = work_type filtered_data['Residence_type'] = Residence_type filtered_data['smoking_status'] = smoking_status filtered_data['gender'] = gender filtered_data.info() filtered_data.head() ###Output _____no_output_____ ###Markdown Model Building ###Code x = filtered_data.drop('stroke', axis = 1) y = filtered_data['stroke'] # splitting the dataset into train and test sets from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2, random_state = 2) x_test.shape x_train.shape filtered_data.describe() # Scaling the train and test features from sklearn.preprocessing import StandardScaler std = StandardScaler() x_train_std = std.fit_transform(x_train) x_test_std = std.transform(x_test) # saving the fitted scaler object in a pickle file import pickle pickle.dump(std, open('scalar.pkl', 'wb')) ###Output _____no_output_____ ###Markdown Model Training Checking different algorithms for training to get the best accuracy Decision Tree ###Code from sklearn.tree import DecisionTreeClassifier dt = DecisionTreeClassifier() dt.fit(x_train_std, y_train) dt.feature_importances_ x_train.columns ###Output _____no_output_____ ###Markdown Clearly, age, average glucose level and bmi are the most important features for the decision tree classifier. 
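For readability, that claim can be checked by pairing each importance score with its feature name; the cell below is an illustrative addition (not part of the original run). ###Code
# Illustrative: pair each feature with its importance score from the fitted decision tree
feat_importance = pd.Series(dt.feature_importances_, index=x_train.columns).sort_values(ascending=False)
print(feat_importance)
###Output _____no_output_____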
###Code y_pred_dt = dt.predict(x_test_std) from sklearn.metrics import accuracy_score ac_dt = accuracy_score(y_test, y_pred_dt) print("Accuracy using decison tree classification algorithm = " + str(ac_dt*100) + " %") ###Output Accuracy using decison tree classification algorithm = 92.08173690932313 % ###Markdown Logistic Regression ###Code from sklearn.linear_model import LogisticRegression lr = LogisticRegression() lr.fit(x_train_std, y_train) y_pred_lr = lr.predict(x_test_std) ac_lr = accuracy_score(y_test, y_pred_lr) print("Accuracy using logistic regression algorithm = " + str(ac_lr*100) + " %") ###Output Accuracy using logistic regression algorithm = 94.89144316730524 % ###Markdown K- Nearest Neighbours ###Code from sklearn.neighbors import KNeighborsClassifier knn = KNeighborsClassifier() knn.fit(x_train_std, y_train) y_pred_knn = knn.predict(x_test_std) ac_knn = accuracy_score(y_test, y_pred_knn) print("Accuracy using k nearest neighbours algorithm = " + str(ac_knn*100) + " %") ###Output Accuracy using k nearest neighbours algorithm = 94.6360153256705 % ###Markdown Random Forest Classifier ###Code from sklearn.ensemble import RandomForestClassifier rf = RandomForestClassifier() rf.fit(x_train_std, y_train) y_pred_rf = rf.predict(x_test_std) ac_rf = accuracy_score(y_test, y_pred_rf) print("Accuracy using random forest classification algorithm = " + str(ac_rf*100) + " %") ###Output Accuracy using random forest classification algorithm = 94.76372924648787 % ###Markdown Support Vector Machines ###Code from sklearn.svm import SVC svc = SVC() svc.fit(x_train_std, y_train) y_pred_svc = svc.predict(x_test_std) ac_svc = accuracy_score(y_test, y_pred_svc) print("Accuracy using support vector classification algorithm = " + str(ac_svc*100) + " %") ac_svc == ac_lr ###Output _____no_output_____ ###Markdown Hence, we can conclude equal and maximum accuracy using logistic regression and svc. Saving the model ###Code import pickle pickle.dump(lr, open(r'finalized_model.pkl', 'wb')) pickle.dump(rf, open(r'finalized_model_rf.pkl', 'wb')) ###Output _____no_output_____
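###Markdown As a quick sanity check on the saved artifacts, the pickled scaler and model can be loaded back and evaluated on the held-out test set. This cell is an illustrative addition; the file names match the `pickle.dump` calls above. ###Code
import pickle
from sklearn.metrics import accuracy_score

# Reload the persisted scaler and logistic-regression model
with open('scalar.pkl', 'rb') as f:
    loaded_std = pickle.load(f)
with open('finalized_model.pkl', 'rb') as f:
    loaded_lr = pickle.load(f)

# The reloaded pipeline should reproduce the accuracy reported above
x_test_reloaded = loaded_std.transform(x_test)
print('Reloaded model accuracy =', accuracy_score(y_test, loaded_lr.predict(x_test_reloaded)))
###Output _____no_output_____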
Notebooks/poc_wikipedia_nlp_LSTM.ipynb
###Markdown Sampling and testing using an LSTM**Model taken and adapted from the examples created for tensorflow** ###Code import pickle import math import pandas as pd import numpy as np from numpy import array from sklearn.feature_extraction.text import CountVectorizer from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.utils import to_categorical from tensorflow.keras.preprocessing.sequence import pad_sequences from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.layers import Dropout from tensorflow.keras.layers import LSTM from tensorflow.keras.layers import Embedding from tensorflow.keras.models import load_model from tensorflow.keras.callbacks import ModelCheckpoint from keras.callbacks import EarlyStopping from pickle import load import keras keras.__version__ ###Output _____no_output_____ ###Markdown Text generation with LSTMThis notebook contains the code samples found in Chapter 8, Section 1 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----[...] Implementing character-level LSTM text generationLet's put these ideas in practice in a Keras implementation. The first thing we need is a lot of text data that we can use to learn a language model. You could use any sufficiently large text file or set of text files -- Wikipedia, the Lord of the Rings, etc. In this example we use the Wikipedia article content loaded from `wikipedia-content-dataset.json` below, so the language model we learn will be a model of the style and topics of those articles rather than a more generic model of the English language. Preparing the data without using the Tokenizer **The dataset size had to be reduced because of memory problems; the final model should be trained on a server with more resources available** ###Code import keras import numpy as np import json f = open('wikipedia-content-dataset.json',) data = json.load(f) content = list(data[x] for x in data.keys()) text = '' for c in content[0:200]: for i in c: text += i print('Corpus length:', len(text)) text[100:1000] ###Output _____no_output_____ ###Markdown Next, we will extract partially-overlapping sequences of length `maxlen`, one-hot encode them and pack them in a 3D Numpy array `x` of shape `(sequences, maxlen, unique_characters)`. Simultaneously, we prepare an array `y` containing the corresponding targets: the one-hot encoded characters that come right after each extracted sequence. ###Code # Length of extracted character sequences maxlen = 60 # We sample a new sequence every `step` characters step = 3 # This holds our extracted sequences sentences = [] # This holds the targets (the follow-up characters) next_chars = [] for i in range(0, len(text) - maxlen, step): sentences.append(text[i: i + maxlen]) next_chars.append(text[i + maxlen]) print('Number of sequences:', len(sentences)) # List of unique characters in the corpus chars = sorted(list(set(text))) print('Unique characters:', len(chars)) # Dictionary mapping unique characters to their index in `chars` char_indices = dict((char, chars.index(char)) for char in chars) # Next, one-hot encode the characters into binary arrays. 
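# x will have shape (num_sequences, maxlen, num_unique_chars): one boolean one-hot vector per character of each window.
# y will have shape (num_sequences, num_unique_chars): the one-hot encoded character that follows each window.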
print('Vectorization...') x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool) y = np.zeros((len(sentences), len(chars)), dtype=np.bool) for i, sentence in enumerate(sentences): for t, char in enumerate(sentence): x[i, t, char_indices[char]] = 1 y[i, char_indices[next_chars[i]]] = 1 print(x.shape) print(y.shape) y ###Output _____no_output_____ ###Markdown Building the networkOur network is a single `LSTM` layer followed by a `Dense` classifier and softmax over all possible characters. But let us note that recurrent neural networks are not the only way to do sequence data generation; 1D convnets also have proven extremely successful at it in recent times. ###Code from keras import layers model = keras.models.Sequential() model.add(layers.LSTM(128, input_shape=(maxlen, len(chars)))) model.add(layers.Dense(len(chars), activation='softmax')) ###Output _____no_output_____ ###Markdown Since our targets are one-hot encoded, we will use `categorical_crossentropy` as the loss to train the model: ###Code optimizer = keras.optimizers.RMSprop(lr=0.01) model.compile(loss='categorical_crossentropy', optimizer=optimizer) ###Output _____no_output_____ ###Markdown Training the language model and sampling from itGiven a trained model and a seed text snippet, we generate new text by repeatedly:* 1) Drawing from the model a probability distribution over the next character given the text available so far* 2) Reweighting the distribution to a certain "temperature"* 3) Sampling the next character at random according to the reweighted distribution* 4) Adding the new character at the end of the available textThis is the code we use to reweight the original probability distribution coming out of the model, and draw a character index from it (the "sampling function"): ###Code def sample(preds, temperature=1.0): preds = np.asarray(preds).astype('float64') preds = np.log(preds) / temperature exp_preds = np.exp(preds) preds = exp_preds / np.sum(exp_preds) probas = np.random.multinomial(1, preds, 1) return np.argmax(probas) ###Output _____no_output_____ ###Markdown Finally, this is the loop where we repeatedly train and generated text. We start generating text using a range of different temperatures after every epoch. This allows us to see how the generated text evolves as the model starts converging, as well as the impact of temperature in the sampling strategy. Acompanhamento do treinamento ###Code import random import sys for epoch in range(1, 100): print('epoch', epoch) # Fit the model for 1 epoch on the available training data model.fit(x, y, batch_size=128, epochs=1) # Select a text seed at random start_index = random.randint(0, len(text) - maxlen - 1) generated_text = text[start_index: start_index + maxlen] print('--- Generating with seed: "' + generated_text + '"') for temperature in [0.2, 0.5, 1.0, 1.2]: print('------ temperature:', temperature) sys.stdout.write(generated_text) # We generate 400 characters for i in range(400): sampled = np.zeros((1, maxlen, len(chars))) for t, char in enumerate(generated_text): sampled[0, t, char_indices[char]] = 1. 
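            # `sampled` now one-hot encodes the current 60-character seed; the model below returns a probability
            # distribution over the next character, which `sample()` re-weights by `temperature` before drawing an index.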
preds = model.predict(sampled, verbose=0)[0] next_index = sample(preds, temperature) next_char = chars[next_index] generated_text += next_char generated_text = generated_text[1:] sys.stdout.write(next_char) sys.stdout.flush() print() ###Output _____no_output_____ ###Markdown Outras arquiteturas de treinamento e tratamento Preparando os dados utilizando tokenizer ###Code import keras import numpy as np import json f = open('wikipedia-content-dataset.json',) data = json.load(f) content = list(data[x] for x in data.keys()) text = '' for c in content[0:200]: for i in c: text += i print('Corpus length:', len(text)) text[100:1000] from tensorflow.keras.preprocessing.text import Tokenizer max_words = 500000 tokenizer = Tokenizer(num_words=max_words) tokenizer.fit_on_texts(content[0:300]) # Para tokenização, usamos a lista de listas sequences = tokenizer.texts_to_sequences(content[0:300]) # Para tokenização, usamos a lista de listas print(sequences[:5]) vocab_size = len(tokenizer.word_index) print('Tamanho do vocabulario: ', vocab_size) # Apresenta um vocabulario maior que a versão codada na mão print('Numero de sequencias: ', len(sequences)) sentence_len = 60 pred_len = 3 train_len = sentence_len - pred_len seq = [] for i in range(len(text)-sentence_len): seq.append(text[i:i+sentence_len]) reverse_word_map = dict(map(reversed, tokenizer.word_index.items())) trainX = [] trainy = [] for i in seq: trainX.append(i[:train_len]) trainy.append(i[-1]) trainX[:5] trainy[:5] np.asarray(trainX) pd.get_dummies(np.asarray(trainy)) np.asarray(trainX).shape pd.get_dummies(np.asarray(trainy)).shape ###Output _____no_output_____ ###Markdown Diferentes arquiteturas de modelo possiveis Versão 11. Embedding layer - Helps model understand 'meaning' of words by mapping them to representative vector space instead of semantic integers2. Stacked LSTM layers - Stacked LSTMs add more depth than additional cells in a single LSTM layer (see paper: https://arxiv.org/abs/1303.5778) - The first LSTM layer must have `return sequences` flag set to True in order to pass sequence information to the second LSTM layer instead of just its end states3. Dense (regression) layer with ReLU activation4. Dense layer with Softmax activation - Outputs word probability across entire vocab ###Code model = Sequential([ Embedding(vocab_size+1, 50, input_length=train_len), LSTM(100, return_sequences=True), LSTM(100), Dense(100, activation='relu'), Dense(vocab_size, activation='softmax') ]) model.summary() ###Output Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding (Embedding) (None, 57, 50) 29600 _________________________________________________________________ lstm (LSTM) (None, 57, 100) 60400 _________________________________________________________________ lstm_1 (LSTM) (None, 100) 80400 _________________________________________________________________ dense (Dense) (None, 100) 10100 _________________________________________________________________ dense_1 (Dense) (None, 591) 59691 ================================================================= Total params: 240,191 Trainable params: 240,191 Non-trainable params: 0 _________________________________________________________________ ###Markdown Versão 2This model is similar to model 1, but we add a dropout layer to prevent overfitting. 
The dropout layer randomly turns off a proportion of neurons fed into it from the previous layer, forcing the model to come up with more robust features ###Code model = Sequential([ Embedding(vocab_size+1, 50, input_length=train_len), LSTM(100, return_sequences=True), LSTM(100), Dense(100, activation='relu'), Dropout(0.1), Dense(vocab_size, activation='softmax') ]) model.summary() ###Output _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding_2 (Embedding) (None, 19, 50) 785700 _________________________________________________________________ lstm_4 (LSTM) (None, 19, 100) 60400 _________________________________________________________________ lstm_5 (LSTM) (None, 100) 80400 _________________________________________________________________ dense_4 (Dense) (None, 100) 10100 _________________________________________________________________ dropout (Dropout) (None, 100) 0 _________________________________________________________________ dense_5 (Dense) (None, 15713) 1587013 ================================================================= Total params: 2,523,613 Trainable params: 2,523,613 Non-trainable params: 0 _________________________________________________________________ ###Markdown Model 3 Model 2 had an additional dropout layer, but the accuracy took a 30% hit.For model 3, we'll try removing the dropout layer and up the number of neurons across all layers by 50%. As expected, this resulted in a higher accuracy on the training set of about 40%. ###Code model = Sequential([ Embedding(vocab_size+1, 50, input_length=train_len), LSTM(150, return_sequences=True), LSTM(150), Dense(150, activation='relu'), Dense(vocab_size, activation='softmax') ]) model.summary() ###Output Model: "sequential_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding_1 (Embedding) (None, 57, 50) 29600 _________________________________________________________________ lstm_2 (LSTM) (None, 57, 150) 120600 _________________________________________________________________ lstm_3 (LSTM) (None, 150) 180600 _________________________________________________________________ dense_2 (Dense) (None, 150) 22650 _________________________________________________________________ dense_3 (Dense) (None, 591) 89241 ================================================================= Total params: 442,691 Trainable params: 442,691 Non-trainable params: 0 _________________________________________________________________ ###Markdown Compile e fit ###Code model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(np.asarray(trainX), pd.get_dummies(np.asarray(trainy)), batch_size=128, epochs=100) ###Output Epoch 1/100 WARNING:tensorflow:Model was constructed with shape (None, 57) for input Tensor("embedding_input:0", shape=(None, 57), dtype=float32), but it was called on an input with incompatible shape (None, 1). 
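###Markdown Note (added): the shape warning above appears because `np.asarray(trainX)` is a 1-D array of raw text windows (shape `(N,)`), while the `Embedding` layer in this model expects integer sequences of length `train_len`. Below is a minimal sketch of one way to build word-level windows from the tokenizer fitted earlier; `token_stream`, `X_tok` and `y_tok` are illustrative names, and the target ids are shifted by one because `Tokenizer` indices start at 1 while the output layer has `vocab_size` units. ###Code
import numpy as np
from tensorflow.keras.utils import to_categorical

train_len = 57                                        # same context length as above
token_stream = [t for seq in sequences for t in seq]  # flatten the tokenized articles

# Sliding windows: train_len input tokens followed by one target token.
windows = np.array([token_stream[i:i + train_len + 1]
                    for i in range(len(token_stream) - train_len)])

X_tok = windows[:, :-1]                               # (N, train_len) integer ids
y_tok = to_categorical(windows[:, -1] - 1, num_classes=vocab_size)

# For a large vocabulary, sparse_categorical_crossentropy on the raw ids would
# avoid materialising the full one-hot target matrix.
# model.fit(X_tok, y_tok, batch_size=128, epochs=10)
###Output _____no_output_____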
###Markdown Acompanhamento do treinamento ###Code import random import sys for epoch in range(1, 100): print('epoch', epoch) model.fit(np.asarray(trainX), pd.get_dummies(np.asarray(trainy)), batch_size=128, epochs=1) start_index = random.randint(0, len(text) - maxlen - 1) generated_text = text[start_index: start_index + maxlen] print('--- Generating with seed: "' + generated_text + '"') for temperature in [0.2, 0.5, 1.0, 1.2]: print('------ temperature:', temperature) sys.stdout.write(generated_text) for i in range(400): sampled = np.zeros((1, maxlen, len(chars))) for t, char in enumerate(generated_text): sampled[0, t, char_indices[char]] = 1. preds = model.predict(sampled, verbose=0)[0] next_index = sample(preds, temperature) next_char = chars[next_index] generated_text += next_char generated_text = generated_text[1:] sys.stdout.write(next_char) sys.stdout.flush() print() ###Output _____no_output_____ ###Markdown Limpando o texto ###Code import keras import numpy as np import json from tensorflow.keras.preprocessing.text import Tokenizer from nltk.tokenize import word_tokenize import string import nltk nltk.download('punkt') nltk.download('stopwords') f = open('wikipedia-content-dataset.json',) data = json.load(f) content = list(data[x] for x in data.keys()) text = '' for c in content[0:200]: for i in c: text += i print('Corpus length:', len(text)) print('Original sample before cleaning \n') print(text[:500]) tokens = word_tokenize(text) # convert to lower case tokens = [w.lower() for w in tokens] # remove punctuation from each word table = str.maketrans('', '', string.punctuation) stripped = [w.translate(table) for w in tokens] # remove remaining tokens that are not alphabetic words = [word for word in stripped if word.isalpha()] # filter out stop words from nltk.corpus import stopwords stop_words = set(stopwords.words('english')) words = [w for w in words if not w in stop_words] text = '' for c in words: text += c text += ' ' print('Original sample after cleaning \n') print(text[:500]) ###Output Original sample after cleaning mcmxxxix common year starting sunday gregorian calendar year common era ce anno domini ad designations year millennium year century last year decade year also marks start second world war largest deadliest conflict human history events world war ii wwii prefix information january january efe news agency based madrid spain officially founded limited company january pioneering us aviator amelia earhart officially declared dead eighteen months disappearance january naturwissenschaften publishes evi ###Markdown Definição do modelo a ser utilizado ###Code import pickle import math import pandas as pd import numpy as np from numpy import array from pickle import load import string import json from sklearn.feature_extraction.text import CountVectorizer from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.utils import to_categorical from tensorflow.keras.preprocessing.sequence import pad_sequences from nltk.tokenize import word_tokenize from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.layers import Dropout from tensorflow.keras.layers import LSTM from tensorflow.keras.layers import Embedding from tensorflow.keras.models import load_model from tensorflow.keras.callbacks import ModelCheckpoint from keras.callbacks import EarlyStopping import nltk nltk.download('punkt') nltk.download('stopwords') ###Output [nltk_data] Downloading package punkt to /root/nltk_data... 
[nltk_data] Package punkt is already up-to-date! [nltk_data] Downloading package stopwords to /root/nltk_data... [nltk_data] Package stopwords is already up-to-date! ###Markdown Preparando os dados ###Code f = open('wikipedia-content-dataset.json',) data = json.load(f) content = list(data[x] for x in data.keys()) text = '' for c in content[0:200]: for i in c: text += i print('Corpus length:', len(text)) tokens = word_tokenize(text) # convert to lower case tokens = [w.lower() for w in tokens] # remove punctuation from each word table = str.maketrans('', '', string.punctuation) stripped = [w.translate(table) for w in tokens] # remove remaining tokens that are not alphabetic words = [word for word in stripped if word.isalpha()] # filter out stop words from nltk.corpus import stopwords stop_words = set(stopwords.words('english')) words = [w for w in words if not w in stop_words] text = '' for c in words: text += c text += ' ' text = text.strip() maxlen = 60 step = 3 sentences = [] next_chars = [] for i in range(0, len(text) - maxlen, step): sentences.append(text[i: i + maxlen]) next_chars.append(text[i + maxlen]) print('Numero de sequencias:', len(sentences)) chars = sorted(list(set(text))) print('Caracteres unicos:', len(chars)) char_indices = dict((char, chars.index(char)) for char in chars) print('Vetorizando o texto') x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool) y = np.zeros((len(sentences), len(chars)), dtype=np.bool) for i, sentence in enumerate(sentences): for t, char in enumerate(sentence): x[i, t, char_indices[char]] = 1 y[i, char_indices[next_chars[i]]] = 1 x.shape y.shape ###Output _____no_output_____ ###Markdown Definição do modelo ###Code model = Sequential([ LSTM(len(chars), return_sequences=True, input_shape=(maxlen, len(chars))), LSTM(len(chars), return_sequences=True), LSTM(len(chars)), Dense(len(chars), activation='relu'), Dense(len(chars), activation='softmax') ]) model.summary() model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Treina o modelo ###Code import random import sys def sample(preds, temperature=1.0): preds = np.asarray(preds).astype('float64') preds = np.log(preds) / temperature exp_preds = np.exp(preds) preds = exp_preds / np.sum(exp_preds) probas = np.random.multinomial(1, preds, 1) return np.argmax(probas) for epoch in range(1, 100): print('epoch', epoch) model.fit(x, y, batch_size=128, epochs=1) start_index = random.randint(0, len(text) - maxlen - 1) generated_text = text[start_index: start_index + maxlen] print('--- Gerando exemplo com a seed: "' + generated_text + '"') for temperature in [0.2, 0.5, 1.0, 1.2]: print('------ temperature:', temperature) sys.stdout.write(generated_text) for i in range(400): sampled = np.zeros((1, maxlen, len(chars))) for t, char in enumerate(generated_text): sampled[0, t, char_indices[char]] = 1. 
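            # Note (added): this generation block repeats the loop used for the
            # character-level model above; the "2ª Geração" section further down
            # factors the same logic into the generate_answer() helper.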
preds = model.predict(sampled, verbose=0)[0] next_index = sample(preds, temperature) next_char = chars[next_index] generated_text += next_char generated_text = generated_text[1:] sys.stdout.write(next_char) sys.stdout.flush() print() f = open('drive/My Drive/wikipedia-nlp/wikipedia-content-dataset.json',) data = json.load(f) len(data.keys()) topics_file = open('topics.txt', 'w') for topics in data.keys(): topics_file.write(topics + '\n') topics_file.close() ###Output _____no_output_____ ###Markdown 2ª Geração do algoritmo ###Code from google.colab import drive drive.mount('/content/drive') !ls 'drive/My Drive/wikipedia-nlp' !pip uninstall tensorflow !pip install tensorflow-gpu import math import numpy as np import string import json import os import sys import logging import nltk import random from nltk.corpus import stopwords from nltk.tokenize import word_tokenize import tensorflow as tf from tensorflow.keras import backend as K from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.layers import Dropout from tensorflow.keras.layers import LSTM from tensorflow.keras.layers import Embedding nltk.download('punkt') nltk.download('stopwords') chars = '' maxlen = 60 tf.keras.__version__ tf.__version__ # Fix seed's os.environ['PYTHONHASHSEED']=str(66) tf.random.set_seed(66) np.random.seed(66) random.seed(66) ###Output _____no_output_____ ###Markdown GPU tensorflowEste trecho de código tem como objetivo definir o tensorflow para rodar em GPU, precisa ter instalado as seguintes dependencias:* tensorflow-gpu* CUDA driver ou equivalente AMD* Driver da placa de video(Se rodando em maquina local)Neste exemplo está dando problema, precisa ser arrumado ###Code def prepareData(dataFile, parts): f = open(dataFile,) data = json.load(f) content = list(data[x] for x in data.keys()) text = '' for c in content[:parts]: for i in c: text += i logging.info(f'Corpus length: {len(text)}') tokens = word_tokenize(text) tokens = [w.lower() for w in tokens] table = str.maketrans('', '', string.punctuation) stripped = [w.translate(table) for w in tokens] words = [word for word in stripped if word.isalpha()] stop_words = set(stopwords.words('english')) words = [w for w in words if not w in stop_words] text = '' for c in words: text += c text += ' ' text = text.strip() logging.info(f'Finished to load file') return text text = prepareData('drive/My Drive/wikipedia-nlp/wikipedia-content-dataset.json', 100) chars = '' maxlen = 60 step = 3 sentences = [] next_chars = [] for i in range(0, len(text) - maxlen, step): sentences.append(text[i: i + maxlen]) next_chars.append(text[i + maxlen]) print('Numero de sequencias:', len(sentences)) chars = sorted(list(set(text))) print('Caracteres unicos:', len(chars)) char_indices = dict((char, chars.index(char)) for char in chars) print('Vetorizando o texto') x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool) y = np.zeros((len(sentences), len(chars)), dtype=np.bool) for i, sentence in enumerate(sentences): for t, char in enumerate(sentence): x[i, t, char_indices[char]] = 1 y[i, char_indices[next_chars[i]]] = 1 print('Finalizado vetorização do texto') model = Sequential([ LSTM(len(chars), return_sequences=True, input_shape=(maxlen, len(chars))), LSTM(len(chars), return_sequences=True), LSTM(len(chars)), Dense(len(chars), activation='relu'), Dense(len(chars), activation='softmax') ]) model.summary() model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(x, y, batch_size=128, 
epochs=100) model.save('wikipedia-nlp-100-parts.h5') def sample(preds, temperature=1.0): preds = np.asarray(preds).astype('float64') preds = np.log(preds) / temperature exp_preds = np.exp(preds) preds = exp_preds / np.sum(exp_preds) probas = np.random.multinomial(1, preds, 1) return np.argmax(probas) start_index = random.randint(0, len(text) - maxlen - 1) generated_text = text[start_index: start_index + maxlen] print('--- Generating with seed: "' + generated_text + '"') for temperature in [0.2, 0.5, 1.0, 1.2]: print('------ temperature:', temperature) sys.stdout.write(generated_text) # We generate 400 characters for i in range(400): sampled = np.zeros((1, maxlen, len(chars))) for t, char in enumerate(generated_text): sampled[0, t, char_indices[char]] = 1. preds = model.predict(sampled, verbose=0)[0] next_index = sample(preds, temperature) next_char = chars[next_index] generated_text += next_char generated_text = generated_text[1:] sys.stdout.write(next_char) sys.stdout.flush() print() def generate_answer(seed, temperature, awnser_size=500): answer = '' generated_text = seed for i in range(awnser_size): sampled = np.zeros((1, maxlen, len(chars))) for t, char in enumerate(generated_text): sampled[0, t, char_indices[char]] = 1. preds = model.predict(sampled, verbose=0)[0] next_index = sample(preds, temperature) next_char = chars[next_index] generated_text += next_char generated_text = generated_text[1:] return generated_text seed = 'computer industry presented interview series bbc radio live' w = generate_answer(seed, 1.2) print(w) ###Output /usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:3: RuntimeWarning: divide by zero encountered in log This is separate from the ipykernel package so we can avoid doing imports until
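###Markdown Note (added): the `RuntimeWarning: divide by zero encountered in log` shown above comes from `np.log(preds)` whenever the model assigns an exact zero probability to some character. A small sketch of a safer variant that clips the probabilities before taking the log (`sample_stable` and `eps` are illustrative names/values, not part of the original notebook): ###Code
import numpy as np

def sample_stable(preds, temperature=1.0, eps=1e-10):
    """Temperature sampling that avoids log(0) on zero-probability characters."""
    preds = np.asarray(preds).astype('float64')
    preds = np.clip(preds, eps, 1.0)            # no exact zeros -> no divide-by-zero
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds - np.max(preds))   # subtract the max for numerical stability
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)
###Output _____no_output_____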
homeworks/tarea_01/work.ipynb
###Markdown Tarea N°01 Instrucciones1.- Completa tus datos personales (nombre y rol USM) en siguiente celda. * __Nombre__: Maximiliano Ramírez Núñez* __Rol__: 201710507-0 2.- Debes _pushear_ este archivo con tus cambios a tu repositorio personal del curso, incluyendo datos, imágenes, scriptsm, etc.3.- Se evaluará: - Soluciones - Código - Que Binder esté bien configurado. - Al presionar `Kernel -> Restart Kernel and Run All Cells` deben ejecutarse todas las celdas sin error. I.- ImagenceptionDesde [Wikipedia](https://es.wikipedia.org/wiki/RGB), __RGB__ (sigla en inglés de red, green, blue) es un modelo de color basado en la síntesis aditiva, con el que es posible representar un color mediante la mezcla por adición de los tres colores de luz primarios. El modelo de color RGB no define por sí mismo lo que significa exactamente rojo, verde o azul, por lo que los mismos valores RGB pueden mostrar colores notablemente diferentes en distintos dispositivos que usen este modelo de color. Aunque utilicen un mismo modelo de color, sus espacios de color pueden variar considerablemente.Para indicar con qué proporción es mezclado cada color, se asigna un valor a cada uno de los colores primarios, de manera que el valor "0" significa que no interviene en la mezcla y, a medida que ese valor aumenta, se entiende que aporta más intensidad a la mezcla. Aunque el intervalo de valores podría ser cualquiera (valores reales entre 0 y 1, valores enteros entre 0 y 37, etc.), es frecuente que cada color primario se codifique con un byte (8 bits).Así, de manera usual, la intensidad de cada una de las componentes se mide según una escala que va del 0 al 255 y cada color es definido por un conjunto de valores escritos entre paréntesis (correspondientes a valores "R", "G" y "B") y separados por comas.El conjunto de todos los colores también se puede representar en forma de cubo. Cada color es un punto de la superficie o del interior de éste. La escala de grises estaría situada en la diagonal que une al color blanco con el negro.![rgb](https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Avl3119color4a.svg/800px-Avl3119color4a.svg.png) Para efectos prácticos del curso, es posible representar cada pixel de una imagen con un array de 3 dimensiones, cada valor representa a una de las capas RGB. Por lo tanto, una imagen de $n \times m$ pixeles se representa como un arreglo de dimension $(n, m , 3)$ En `Python` una de las librerías de procesamiento de imágenes más utilizada es `Pillow`. Abrir una imagen es tan fácil como: ###Code # librerias import os import numpy as np from PIL import Image gatito = Image.open(os.path.join("images","gatito.png")) ###Output _____no_output_____ ###Markdown Notar que la variable anterior es de una clase específica de la librería. ###Code type(gatito) ###Output _____no_output_____ ###Markdown Para ver la imagen en Jupyter puedes utilizar la misma técnica que con los `pd.DataFrames`, es decir: ###Code gatito ###Output _____no_output_____ ###Markdown Para tener su representación en un array podemos utilizar el constructor `np.array` con argumento la imagen. ###Code gatito_np = np.array(gatito) print(f"Dimension de la imagen gatito: {gatito_np.shape}.\n") print(f"Al convertir a np.ndarry el tipo de elementos es {gatito_np.dtype}.\n") gatito_np ###Output _____no_output_____ ###Markdown 1.- Encontrando la imagen ocultaLa imagen anterior tiene una imagen oculta, el ejercicio corresponde en descifrarlo. Las instrucciones son las siguientes: 1.1 Crear una lista vacía declarada como `secret_list`. 
###Code secret_list = []## FIX ME ## ###Output _____no_output_____ ###Markdown 1.2 Iterar por cada uno de los canales RGB (`gatito_np.shape[2]`) y en cada iteración: * Crear un arreglo temporal llamado `secret_aux` de dos dimensiones, de la misma dimension de pixeles de la imagen `gatito` y que tenga valores enteros, `0` si el valor de la capa de `gatito_np` es par y `1` si es impar. - No iterar por filas y columnas. - Utilizar la operación módulo `%`. - En la i-ésima iteración de los canales la capa de `gatito_np` es `gatito_np[:, :, i]`. * Escalar `secret_aux` a valores 0 y 255. * Cambiar el `dtype` de `secret_aux` a `np.uint8` (utilize el méteodo `astype()`). * Agregue `secret_aux` a `secret_list`. Al final de la iteración `secret_list` debe tener solo tres arreglos.**Observación:** recuerde que puede aplicar operaciones directo a un arreglo de numpy. ###Code for i in range(3): secret_aux=(gatito_np[:,:,i]%2)*250 secret_aux.astype(np.uint8) secret_list.append(secret_aux) secret_list print(f"secret_list tiene {len(secret_list)} elementos") ###Output _____no_output_____ ###Markdown 1.3 Crear la variable `secret_np` concatenando horizontalmente los elementos de `secret_list`. ###Code secret_np = np.concatenate((secret_list[0],secret_list[1],secret_list[2]),axis=1) secret_np ###Output _____no_output_____ ###Markdown 1.4 Crear el objeto `secret_img` utilizando el arreglo `secret_np`, asegurar que los valores estén entre 0 y 255, y que el dtype sea `np.uint8`, con el método `Image.fromarray` con argumento `mode="L"` ###Code np.unique(secret_np) secret_np.dtype secret_img = Image.fromarray(secret_np, mode='L') ###Output _____no_output_____ ###Markdown Ahora puedes ver el resultado! ###Code secret_img ###Output _____no_output_____ ###Markdown 2.- Escondiendo una nueva imagenEs tu turno, ahora tu esconderás una imagen. Las instrucciones son las siguientes: 2.1 Selecciona una imagen de 2160 x 3840 pixeles (a.k.a resolución 4k), lo importante es que sea solo en blanco y negro, en la carpeta `images` se disponibiliza como ejemplo la imagen `black_and_white_example.jpg` y crea una variable llamada `my_img` leyendo la imagen seleccionada con `Image.open()`. ###Code my_img = Image.open(os.path.join("images","black_and_white_example.jpg")) my_img ###Output _____no_output_____ ###Markdown 2.2 Crea un arreglo llamado `my_img_np` utilizando `my_img` y el método `np.array()`. * Es importante que `my_img_np.shape` sea `(2160, 3840)`, es decir, que solo sea de dos dimensiones. Esto porque es una imagen en blanco y negro, no necesitando el modelo RGB. ###Code my_img_np = np.array(my_img) my_img_np.shape ###Output _____no_output_____ ###Markdown 2.3 Crear la variable `my_img_np_aux` utilizando un _umbral_ con tal de que: - 1: Si el valor del pixel es mayor al _umbral_. - 0: Si el valor del pixel es menor o igual al _umbral_. - El `dtype` debe ser `np.uint8`. - Para `black_and_white_example.jpg` un umbral adecuado es `20`. ###Code umbral = 20 my_img_np_aux = np.zeros(my_img_np.shape) for i in range(my_img_np.shape[0]): np.place(my_img_np[i], my_img_np[i] <= umbral , 0 ) np.place(my_img_np[i], my_img_np[i] > umbral , 1 ) my_img_np_aux[i] = my_img_np[i] my_img_np_aux ###Output _____no_output_____ ###Markdown Puedes probar que tan bien quedó la imagen con la siguiente linea. Si crees que no se ve bien, puedes cambiar el _umbral_. 
###Code Image.fromarray(my_img_np_aux * 255, mode='1') ###Output _____no_output_____ ###Markdown 2.4 Dividir la imagen en tres arreglos de tamaño (2160, 1280) y guardarlos en una lista con el nombre `my_img_split`. Hint: Revisa en la documentación de `numpy`. ###Code my_img_split = np.hsplit(my_img_np_aux, 3) ###Output _____no_output_____ ###Markdown Revisa utilizando la siguiente iteración. ###Code for img_array in my_img_split: print(img_array.shape) ###Output _____no_output_____ ###Markdown 2.5 La imagen donde se esconderá tu imagen selecionada está en la carpeta `images` con el nombre `gatito_original.png`, que sospechosamente es de 2160 x 1280 pixeles. Carga la imagen en la variable `cat` y luego crea arreglo `cat_np` utilizando `cat`. Verifica que `cat_np.shape = (2160, 1280, 3)`. ###Code cat = Image.open(os.path.join("images","gatito_original.png")) cat_np = np.array(cat) print(cat_np.shape) ###Output _____no_output_____ ###Markdown 2.6 Convierte todos los valores de `cat_np` a valores pares. Esto lo puedes hacer sumando 1 a cada valor de arreglo si es impar ###Code cat_np = cat_np%2 +cat_np cat_np.shape ###Output _____no_output_____ ###Markdown 2.7 Itera por canal RGB de `cat_np` y en cada capa suma los valores de uno de los arreglos de `my_img_split`. ###Code for channel in range(cat_np.shape[2]): cat_np[:,:,channel] = cat_np[:,:,channel] + my_img_split[channel] ###Output _____no_output_____ ###Markdown 2.8 Crea una variable llamada `cat_secret_im` con `Image.fromarray` y la variable `cat_np` (que ya ha sido modificada). Luego guarda la imagen en la carpeta `images` con el nombre `my_secret.png`. ###Code cat_secret_im = Image.fromarray(cat_np) cat_secret_im.save(os.path.join("images", "my_secret.png")) ###Output _____no_output_____ ###Markdown 2.9 Crea una función llamada `imagenception()` que como argumento tenga la ruta de la imagen que quieres descifrar y que descifre la imagen secreta recientemente creada. Hint: Utiliza todos los pasos de la primera parte. ###Code def imagenception(filepath): ''' Función que decodifica una imagen oculta en otra. input: Ruta de una imagen en una carpeta, la cuál tiene una imagen oculta output: Imagen oculta en la imagen original ''' imagen = Image.open(filepath) imagen_np = np.array(imagen) secret_list= [] for i in range(3): secret_aux=np.zeros((imagen_np.shape[0],imagen_np.shape[1])) secret_aux=(imagen_np[:,:,i]%2)*250 secret_aux.astype(np.uint8) secret_list.append(secret_aux) secret_np = np.concatenate((secret_list[0],secret_list[1],secret_list[2]),axis=1) secret_img = Image.fromarray(secret_np) return secret_img my_secret_img = imagenception(os.path.join("images", "my_secret.png")) my_secret_img ###Output _____no_output_____ ###Markdown II.- Analizando la FelicidadEste ejercicio es netamente análisis de datos, tratando de abarcar problemas típicos como la lectura de datos, corrección de errores, métricas agrupadas, unión de datos, etc. Utilizaremos un conjunto de datos llamado __World Happiness Report__ disponible en el siguiente [link](https://www.kaggle.com/unsdsn/world-happiness), de donde se puede obtener información al respecto. ContextThe World Happiness Report is a landmark survey of the state of global happiness. The first report was published in 2012, the second in 2013, the third in 2015, and the fourth in the 2016 Update. The World Happiness 2017, which ranks 155 countries by their happiness levels, was released at the United Nations at an event celebrating International Day of Happiness on March 20th. 
The report continues to gain global recognition as governments, organizations and civil society increasingly use happiness indicators to inform their policy-making decisions. Leading experts across fields – economics, psychology, survey analysis, national statistics, health, public policy and more – describe how measurements of well-being can be used effectively to assess the progress of nations. The reports review the state of happiness in the world today and show how the new science of happiness explains personal and national variations in happiness. ContentThe happiness scores and rankings use data from the Gallup World Poll. The scores are based on answers to the main life evaluation question asked in the poll. This question, known as the Cantril ladder, asks respondents to think of a ladder with the best possible life for them being a 10 and the worst possible life being a 0 and to rate their own current lives on that scale. The scores are from nationally representative samples for the years 2013-2016 and use the Gallup weights to make the estimates representative. The columns following the happiness score estimate the extent to which each of six factors – economic production, social support, life expectancy, freedom, absence of corruption, and generosity – contribute to making life evaluations higher in each country than they are in Dystopia, a hypothetical country that has values equal to the world’s lowest national averages for each of the six factors. They have no impact on the total score reported for each country, but they do explain why some countries rank higher than others. 2.1 Lectura de datos ###Code # libraries import numpy as np import os import pandas as pd pd.set_option("display.max_columns", 999) # Permite mostrar hasta 999 columnas de un DataFrame en Jupyter. ###Output _____no_output_____ ###Markdown En la carpeta `data/world-happiness` se disponen de tres archivos, uno por cada reporte anual (años 2015, 2016 y 2017). No es de sorprender que envíen un archivo por año (podría ser mensual, semestral, etc.), lo imortante es ser capaces de leer una cantidad __variable__ de archivos al mismo tiempo. Una buena práctica es crear un diccionario de dataframes. 
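###Markdown Note (added): because the number of yearly report files can change, one option is to discover them with `glob` instead of hardcoding the years. This is only a sketch — it assumes the files stay under `data/world-happiness` and that each file name is exactly the year; `df_dict_auto` is an illustrative name (the cell below builds the dictionary that is actually used). ###Code
import os
from glob import glob

import pandas as pd

# Pick up every "<year>.csv" in the folder, whatever years happen to be present.
files = sorted(glob(os.path.join("data", "world-happiness", "*.csv")))
df_dict_auto = {
    int(os.path.splitext(os.path.basename(path))[0]): pd.read_csv(path)
    for path in files
}
###Output _____no_output_____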
###Code # Comprehension dictionary df_dict = { year: pd.read_csv(os.path.join("data", "world-happiness", f"{year}.csv")) for year in [2015, 2016, 2017] } ###Output _____no_output_____ ###Markdown Por ejemplo, se puede acceder al DataFrame asociado al archivo `data/world-happiness/2016.csv` de la siguiente manera: ###Code df_dict[2017] ###Output _____no_output_____ ###Markdown Una pequeña descripción de las columnas* `Country` Name of the country.* `Region` Region the country belongs to.* `Happiness Rank` Rank of the country based on the Happiness Score.* `Happiness Score` A metric measured in 2015 by asking the sampled people the question: "How would you rate your happiness on a scale of 0 to 10 where 10 is the happiest."* `Standard Error` The standard error of the happiness score.* `Economy (GDP per Capita)` The extent to which GDP contributes to the calculation of the Happiness Score.* `Family` The extent to which Family contributes to the calculation of the Happiness Score* `Health (Life Expectancy)` The extent to which Life expectancy contributed to the calculation of the Happiness Score* `Freedom` The extent to which Freedom contributed to the calculation of the Happiness Score.* `Trust (Government Corruption)` The extent to which Perception of Corruption contributes to Happiness Score.* `Generosity` The extent to which Generosity contributed to the calculation of the Happiness Score.* `Dystopia Residual` The extent to which Dystopia Residual contributed to the calculation of the Happiness Score. Notar que los conjuntos de datos no poseen las mismas columnas, por lo tanto, solo se trabajarán con las columnas en común y posteriormente agregaremos el año con tal de concatenar los tres conjuntos. ###Code [df_i.columns.values for df_i in df_dict.values()] from functools import reduce intersection_columns = reduce(np.intersect1d, [df_i.columns.values for df_i in df_dict.values()]).tolist() print(intersection_columns) ###Output _____no_output_____ ###Markdown Explica con tus palabras las operaciones que se realizaron para obtener la variable `intersection_columns`. __Respuesta:__ Se crea una lista que contiene 3 arrays, cada uno con las columnas de los 3 dataframes para cada año, con la función reduce(), que recibe como parametro una funcion, en este caso np.intersect1d, la cuál encuentra la intersección de arrays, la cuál es aplicada a un iterable, en este caso la lista con 3 arrays, de esta forma finalmente pasamos el array a lista con tolist() 2.2 Concatenación y procesado Define el DataFrame `happiness` tal que:* Sea la concatenación de los dataframes de `df_dict` - Nota que en la documentación de `pd.concat` puedes entregar como argumento directamente un diccionario. - No ordenes los _axis_ (ver documentación). - Los nombres de los _levels_ para los multi-index resultante deben ser `["Year", "drop_me"]`.* Aplica el método `drop_level` con tal de eliminar el nivel del multi-index llamado `drop_me`.* Resetea los índices.* Selecciona solo las columnas de la lista `intersection_columns`.* Los nombres de las columnas deben estar en minísculas, reemplazar espacios por guiones bajos (`_`) y elimina los paréntesis. ###Code happiness = ( pd.concat(df_dict).loc[:, intersection_columns].rename(columns=lambda x: x.lower().replace(" ", "_")) ) happiness ###Output _____no_output_____ ###Markdown 2.3 Análisis Como siempre, partimos con un análisis descriptivo simple. 
###Code happiness.describe(include="all").fillna("").T ###Output _____no_output_____ ###Markdown ¿Cuántos países no tienen mediciones de felicidad en los tres años del estudio? ¿Cuáles son? ###Code c2015= set(happiness['country'][2015].unique()) c2016= set(happiness['country'][2016].unique()) c2017= set(happiness['country'][2017].unique()) c2015.union(c2016).union(c2017)-(c2017&c2016&c2015) ###Output _____no_output_____ ###Markdown __Respuesta:__ 20 paises, los cuales se especifican arriba Note que la lista de países proveniente de la pregunta anterior tiene errores de consistencia, por ejemplo están los registros de `Hong Kong` y `Hong Kong S.A.R., China` que escencialmente son el mismo. Lo mismo ocurre con `Taiwan` y `Somaliland Region`. Modifique la columna `country` del dataframe `happiness` con tal de reparar los errores de `Hong Kong`, `Taiwan` y `Somaliland Region`. ###Code bad_country_names_dict = {"Hong Kong S.A.R., China": "Hong Kong", "Somaliland region":"Somalia", "Somaliland Region":"Somalia", "Taiwan Province of China": "Taiwan"} happiness = happiness.assign(country= lambda x: x['country'].replace(bad_country_names_dict)) ###Output _____no_output_____ ###Markdown Luego de la modificación, ¿Cuántos países no tienen mediciones en los tres años de estudio? ###Code c2015_c= set(happiness['country'][2015].unique()) c2016_c= set(happiness['country'][2016].unique()) c2017_c= set(happiness['country'][2017].unique()) c2015_c.union(c2016_c).union(c2017_c)-(c2017_c&c2016_c&c2015_c) ###Output _____no_output_____ ###Markdown __Respuesta:__ 13 paises Pivotea el dataframe `happines` tal que los índices sean los años, las columnas los países y el valor su `happiness_score`. LLena los valores nulos con un _string_ vacío `""`. Un país no puede tener más de un registro por año, por lo que puedes utilizar directamente el médoto `pd.DataFrame.pivot()`. ###Code # Se crea una gran lista con los años para agregarla al dataframe list_2015=[] list_2016=[] list_2017=[] for i in range(df_dict[2015].shape[0]): list_2015.append(2015) for i in range(df_dict[2016].shape[0]): list_2015.append(2016) for i in range(df_dict[2017].shape[0]): list_2015.append(2017) list_tot= list_2015 + list_2016 + list_2017 happiness['year']=list_tot h_piv_score= happiness.pivot_table(index='year' , columns="country", values="happiness_score", fill_value='') h_piv_score h_piv_rank= happiness.pivot_table(index='year' , columns="country", values="happiness_rank", fill_value='') h_piv_rank ###Output _____no_output_____ ###Markdown ¿Qué información podrías sacar rápidamente de esta tabla pivoteada? ¿Podrías decir que siempre es útil pivotear una tabla? __Respuesta:__ cuáles son los paises que no presentan mediciones en ciertos años. y cuál fue el happiness_score y el happiness_rank de cada pais en cada año En promedio, ¿Cuáles son los tres países con el mejor ranking de felicidad? ###Code h_piv_rank.mean().nsmallest(3) ###Output _____no_output_____ ###Markdown __Respuesta:__ * 1) Denmark* 2) Switzerland* 3) Iceland En promedio, ¿Cuáles son los tres países con el mayor _score_ de felicidad? ¿Son distintos a los con mejor ranking en promedio? ###Code h_piv_score.mean().nlargest(3) ###Output _____no_output_____ ###Markdown __Respuesta:__ * 1) Switzerland* 2) Denmark* 3) Iceland Calcula el promedio anual de todas las columnas factores de felicidad, es decir, todas las variables numéricas excepto `happiness_score` y `happiness_rank`. 
###Code happ= happiness.drop(['happiness_score','happiness_rank'],axis=1).groupby(['year']).mean() happ ###Output _____no_output_____ ###Markdown Respecto al cálculo anterior, para cada uno de los años, ¿Cuál es el factor que más contribuye (en promedio) al _score_ de la felicidad y en qué medida? **Tenemos que "happiness_score" corresponde a la suma de todos los otrs factores, por lo que en promedio el factor más influyente corresponde a "dystopia_residual" para todos los años** __Resuesta:__* 2015: "dystopia_residual"* 2016: "dystopia_residual"* 2017: "dystopia_residual" 2.4 Agregando más datos A continuación, agregaremos un nuevo conjunto de datos, el que contiene estadísticas de suicidio por años, países y rangos etáreos. Se encuentra disponible en el siguiente [link](https://www.kaggle.com/russellyates88/suicide-rates-overview-1985-to-2016). ###Code suicide = pd.read_csv(os.path.join("data", "suicide_rates.csv")) suicide.head() ###Output _____no_output_____ ###Markdown La mayoría de las columnas son autoexplicativas.* `country`* `year`* `sex`* `age`* `suicides_no`* `population`* `suicides/100k pop`* `country-year`* `HDI for year` Human Development Index* `gdp_for_year ($)` Gross Domestic Product* `gdp_per_capita ($)`* `generation` based on age grouping average Un poco de estadística descriptiva. ###Code suicide.describe(include="all").fillna("").T ###Output _____no_output_____ ###Markdown Crea un nuevo DataFrame llamado suicide_agg siguiendo las siguientes instrucciones:* Agrupa por país y año.* Suma la población y el número de suicidios.* Resetea los índices.* Agrega una nueva columna llamada `suicides_ratio_100k` formada por la división de `suicides_no` y `population`, para posteriormente muliplicarla por 100,000.* Agrega una nuevale columna llamada `suicides_rank` similar a `happiness_rank`, es decir, que asigne un orden __por año__ a cada país según la columna `suicides_ratio_100k` tal que el rank 1 corresponda al que tenga mayor `suicides_ratio_100k`. Hint: Usa el método `rank()`. ###Code suicide # Es posible hacer todas las operaciones encadenadas! suicides_agg = ( suicide.groupby([suicide['country'], suicide['year']]) .agg( {'suicides_no': 'sum' , 'population':'sum'} ) .reset_index() .assign( suicides_ratio_100k= lambda x: x['suicides_no']/x['population'], suicides_rank= lambda x: x['suicides_ratio_100k'].rank(ascending=False) ) ) suicides_agg ###Output _____no_output_____ ###Markdown Crea un nuevo DataFrame con el nombre `hap_sui` al unir `happiness` y `suicides_agg` tal que coincidan país y año, quédate con solo los registros que coincidan en ambas DataFrames. ###Code hap_sui = pd.merge(happiness,suicides_agg,on=['country','year']) hap_sui ###Output _____no_output_____ ###Markdown ¿Qué tipo de correlación lineal hay entre las variables `happiness_rank` y `suicides_rank`? ###Code hap_sui.loc[:, ["happiness_rank", "suicides_rank"]].corr() ###Output _____no_output_____ ###Markdown __Respuesta:__ **Como los valores son muy cercanos a 0, podemos concluir que no existe correlación lineal entre las variables** ¿Qué tipo de correlación lineal hay entre las variables `happiness_rank` y `suicides_rank` por cada año? 
###Code hap_sui_2015= hap_sui[hap_sui['year']==2015] hap_sui_2016= hap_sui[hap_sui['year']==2016] hap_sui_2015.loc[:, ["happiness_rank", "suicides_rank"]].corr() hap_sui_2016.loc[:, ["happiness_rank", "suicides_rank"]].corr() ###Output _____no_output_____ ###Markdown __Respuesta:__ **En ninguno de los casos existe correlación lineal entre las variables** ¿La respuesta de las dos preguntas anteriores cambia si se utilizan las variables `happiness_score` y `suicides_ratio_100k`? ###Code hap_sui_2015.loc[:, ["happiness_score", "suicides_ratio_100k"]].corr() hap_sui_2016.loc[:, ["happiness_score", "suicides_ratio_100k"]].corr() ###Output _____no_output_____ ###Markdown __Respuesta:__ **Como el coeficiente es aún mas cercano a cero, no hay correlación lineal entre las variables** III.- Índices de Costos de Vida Estos índices están ajustados a la Ciudad de Nueva York (NYC). Lo que significa que para la Ciudad de Nueva York, cada índice debería marcar 100(%). Si otra ciudad tiene, por ejemplo, un índice de alquiler de 120, significa que en esa ciudad se paga de media por el alquiler un 20% más que en Nueva York. Si una ciudad tiene un índice de alquiler de 70, significa que en esa ciudad los alquileres son de media un 30% más baratos que en Nueva York.* El Índice de Costo de Vida (Sin Alquiler) es un indicador relativo de los precios de bienes de consumo, incluyendo comestibles, restaurantes, transporte y servicios. El Índice de Costo de Vida no incluye gastos de residencia como alquileres o hipotecas. Si una ciudad tiene un Costo de Vida de 120, significa que Numbeo estima que es un 20% más cara que Nueva York (sin contar alquiler).* El Índice de Alquiler es una estimación de precios de alquiler de apartamentos de una ciudad comparada con Nueva York. Si el Índice de Alquiler es 80, Numbeo estima que el precio de los alquileres en esa ciudad es de media un 20% más barato que en Nueva York.* El Índice de Comestibles es una estimación de los precios de la compra de una ciudad en comparación con Nueva York. Para calcular esta sección, Numbeo utiliza el peso de los artículos en la sección "Mercados" por cada ciudad.* El Índice de Restaurantes es una comparación de precios de comidas y bebidas en bares y restaurantes en comparación con NY.* El Índice de Costo de Vida más Alquiler es una estimación de precios de consumo incluyendo alquiler en comparación con la Ciudad de Nueva York.* El Poder Adquisitivo Local muestra la capacidad adquisitiva relativa a la hora de comprar bienes y servicios en una ciudad determinada, con relación al salario medio de la ciudad. Si el poder adquisitivo doméstico es 40, significa que los habitantes de dicha ciudad con salario medio pueden permitirse comprar una media de 60% menos bienes y servicios que los habitantes de Nueva York con salario medio. Para más información sobre los pesos utilizados (fórmula completa) puedes visitar: [motivación y metodología](https://es.numbeo.com/coste-de-vida/motivaci%C3%B3n-y-metodolog%C3%ADa). Para comenzar es necesario instalar el paquete `lxml` en tu entorno virtual de conda para poder descargar los datos. Basta con ejecutar `conda install -n mat281 lxml`O cambia `mat281` por el ambiente que estés utilizando. Se disponibiliza a continuación la carga de datos de un dataframe. 
###Code import os import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns pd.set_option('display.max_columns', 999) %matplotlib inline years = [2015, 2016, 2017, 2018, 2019, 2020] life_cost = ( pd.concat( { year: ( pd.read_html(f"https://www.numbeo.com/cost-of-living/rankings.jsp?title={year}")[1] .rename(columns=lambda x: x.lower().replace(" ", "_")) .assign(rank=lambda x: x.index + 1) .set_index("rank") ) for year in years } ) .rename_axis(["year", "rank"]) .reset_index() ) life_cost ###Output _____no_output_____ ###Markdown Ejercicio 3.1 Explique lo que se hizo en la celda anterior detalladamente. **Se concatenan 6 distintos diccionarios de una página web, correspondientes a datos de entre los años 2015 al 2020. Los nombres de las columnas son escritos todos con letras minusculas y los espacios son reemplazados por "_". Se crea la columna Rank la cual corresponde al indice del dataframe + 1. Para finalizar, se cambian los nombres de las primeras 2 columnas nombrandolas rank y year, y se resetean los índices** Ejercicio 3.2 Genera un histograma del índice del costo de vida (sin alquiler) para cada año (es decir, 6 histogramas).¿Qué conclusión puedes sacar de estos gráficos? ###Code #Se separan los DataFrames para cada año df_2015=life_cost[life_cost['year']==2015] df_2016=life_cost[life_cost['year']==2016] df_2017=life_cost[life_cost['year']==2017] df_2018=life_cost[life_cost['year']==2018] df_2019=life_cost[life_cost['year']==2019] df_2020=life_cost[life_cost['year']==2020] ###Output _____no_output_____ ###Markdown Histogramas ###Code fig, axes = plt.subplots(2, 3, figsize=(18,10)) sns.distplot(df_2015['cost_of_living_index'], kde = False, ax=axes[0][0]).set(ylim=(0, 120), xlim=(20,160)) axes[0][0].set_title("Indice de costo de vida año 2015") sns.distplot(df_2016['cost_of_living_index'], kde = False, ax=axes[0][1]).set(ylim=(0, 120), xlim=(20,160)) axes[0][1].set_title("Indice de costo de vida año 2016") sns.distplot(df_2017['cost_of_living_index'], kde = False, ax=axes[0][2]).set(ylim=(0, 120), xlim=(20,160)) axes[0][2].set_title("Indice de costo de vida año 2017") sns.distplot(df_2018['cost_of_living_index'], kde = False, ax=axes[1][0]).set(ylim=(0, 120), xlim=(20,160)) axes[1][0].set_title("Indice de costo de vida año 2018") sns.distplot(df_2019['cost_of_living_index'], kde = False, ax=axes[1][1]).set(ylim=(0, 120), xlim=(20,160)) axes[1][1].set_title("Indice de costo de vida año 2019") sns.distplot(df_2020['cost_of_living_index'], kde = False, ax=axes[1][2]).set(ylim=(0, 120), xlim=(20,160)) axes[1][2].set_title("Indice de costo de vida año 2020") ###Output _____no_output_____ ###Markdown Analisis por Ciudad **Se observa que en el año 2017, por algún motivo, paises con costo de vida bajo aumentaron su costo de vida, sin ambargo, siguen siendo más barato que nuestra ciudad de referencia. Para el año 2020 ya dejan de existir ciudades con costo de vida tan elevados (60% más que la ciudad de referencia)** Ejercicio 3.3 Grafica el índice de restaurantes a través de los años para diez ciudades escogidas pseudo-aleatoriamente (variable `my_cities` de la celda siguiente) en un mismo gráfico. Recuerda escoger el tipo de gráfico adecuadamente.¿Ves alguna relación? ¿Qué podrías decir del gráfico? ¿Por qué no graficar todas las ciudades en lugar de solo escoger algunas? 
###Code rol_seed = 201710507 # Escribe tu rol UTFSM sin número verificador my_cities = life_cost["city"].drop_duplicates().sample(n=10, random_state=rol_seed).values list(my_cities) df_cities=life_cost[life_cost["city"]==list(my_cities)[0]] for i in range(1,len(my_cities)): df_cities= pd.concat((df_cities, life_cost[life_cost["city"]==list(my_cities)[i]]),axis=0, ignore_index=True) df_cities_2015=df_cities[df_cities['year']==2015] df_cities_2016=df_cities[df_cities['year']==2016] df_cities_2017=df_cities[df_cities['year']==2017] df_cities_2018=df_cities[df_cities['year']==2018] df_cities_2019=df_cities[df_cities['year']==2019] df_cities_2020=df_cities[df_cities['year']==2020] from matplotlib.pyplot import figure figure(num=None, figsize=(16, 12), dpi=80, facecolor='w', edgecolor='k') # grafico de linea palette = sns.color_palette("hls", 6) g=sns.lineplot( x='year', y='restaurant_price_index', hue='city',# color por Generation data=df_cities, ci = None, ) ###Output _____no_output_____ ###Markdown La tendencia indica que el indice de restaurant al avanzar en el tiempo es decreciente o bien se mantiene en el tiempo para las ciudades de estudio, siendo todas más economicas que la ciudad de referencia. Si se consideran todas las ciudades, se pierde la legibilidad y por lo tanto la interpretación del gráfico Ejercicio 3.4Genera un mapa de calor tal que:- El eje horizontal corresponda a cada uno de los índices.- El eje vertical corresponda a cada una de las ciudades de `my_cities`.- El color y valor en cada celda sea el promedio de los indicadores. - El valor de la celda debe tener solo dos decimales. ###Code df_prom=df_cities.drop(['year','rank'],axis=1).groupby(['city']).mean() df_prom redable_index_names = { 'cost_of_living_index': 'Costo de Vida', 'rent_index': 'Alquiler', 'cost_of_living_plus_rent_index': 'Costo de Vida + Alquiler', 'groceries_index': 'Comestibles', 'restaurant_price_index': 'Restaurantes', 'local_purchasing_power_index': 'Poder Adquisitivo Local' } indices = list(redable_index_names.values()) indices import numpy as np import matplotlib import matplotlib.pyplot as plt from mpl_heatmap import heatmap, annotate_heatmap cities = list(df_cities['city'].unique()) indices = list(redable_index_names.values()) valores = df_prom.values #graficos fig, ax = plt.subplots(figsize=(10, 10)) im, cbar = heatmap( valores, # valores cities, # filas indices, # columnas ax=ax, # ventana cmap="YlGn", # gama de colores cbarlabel="harvest [t/year]" # nombre barra de colores ) texts = annotate_heatmap(im, valfmt="{x:.2f}") fig.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown En general ciudades con un poder adquisitivo local mayor, presentan costos mayores, sin embargo siguen siendo más economicos que la ciudad de referencia. Ejercicio 3.5Primero, agregar la columna `country` al dataframe `life_cost` a partir de la columna `city`. Luego, realizar un scatter plot donde:- Datos correspondientes al año 2020.- El eje horizontal corresponda a el Índice de Comestibles.- El eje vertical corresponda a el Poder Adquisitivo Local.- El color corresponda al país.- Debe contener solo 20 países que son escogidos pseudo-aleatoriamente. - Para ellos deber agregar la columna `country` al dataframe `life_cost`. - Ejecutar la celda sub-siguiente para generar el _np.array_ `my_countries`.- Se debe utilizar un esquema de color distinto (_color scheme_ o _colormap_), puesto que hay 20 categorías. - En `altair` utilizar el esquema `category20`. 
[Más información aquí](https://altair-viz.github.io/user_guide/customization.html?highlight=color%20mapcolor-schemes). - En `matplotlib` utilizar el esquema `tab20`. [Más información aquí](https://matplotlib.org/3.1.0/tutorials/colors/colormaps.html). - En caso que los puntos del scatter plot se vean muy pequeños en relación al gráfico debes aumentar su tamaño. ###Code #a través de la columna city se obtienen todos los paises paises=[] for i in life_cost['city']: aux= i.split(',') paises.append(aux[-1].strip()) life_cost['country'] = paises life_cost my_countries = life_cost.query("year == 2020")["country"].drop_duplicates().sample(n=20, random_state=rol_seed).values df_countries=life_cost[life_cost["country"]==list(my_countries)[0]] for i in range(1,len(my_countries)): df_countries= pd.concat((df_countries, life_cost[life_cost["country"]==list(my_countries)[i]]),axis=0, ignore_index=True) df_countries life_cost countries_2020=life_cost[life_cost["city"]==list(my_countries)[0]] for i in range(1,len(my_countries)): countries_2020= pd.concat((countries_2020, life_cost[life_cost["country"]==list(my_countries)[i]]),axis=0, ignore_index=True) countries_2020 import altair as alt from vega_datasets import data alt.Chart(countries_2020).mark_point().encode( x='groceries_index', y='local_purchasing_power_index', color=alt.Color('country', scale=alt.Scale(scheme='category20')) ) ###Output _____no_output_____ ###Markdown ¿Qué comentarios puedes entregar al comparar los países seleccionados? Vemos que se presenta una gran cantidad de ciudades en paises como Australia, China, Germany e India, donde se tiene que a excepción de Australia, todos estos paises presentan un groceries_index menor a un 60%, a pesar de que dentro de los paises mencionados hay algunos con gran poder adquisitivo local. Ejercicio 3.6El siguiente ejercicio necesita de un conjunto de datos adicional, que relacione el país con el continente. A continuación se disponibiliza el dataframe `countries` con las columnas `country` y `continent`.Agrega la columna `continent` al dataframe `life_cost` realizando un _merge_ con `countries`. ###Code rename_countries_dict = { "Czechia": "Czech Republic", "Bosnia and Herzegovina": "Bosnia And Herzegovina", "Kosovo": "Kosovo (Disputed Territory)", "North Macedonia": "Macedonia", "Trinidad and Tobago": "Trinidad And Tobago" } countries = ( pd.read_html("http://www.geonames.org/countries/", keep_default_na=False)[1] .rename(columns=lambda x: x.lower()) .assign(country=lambda x: x["country"].replace(rename_countries_dict)) .loc[:, ["country", "continent"]] ) countries.head() life_cost=life_cost.merge(countries,on='country') life_cost ###Output _____no_output_____ ###Markdown A continuación genera un gráfico que posea 36 subgráficos, estos se generan realizando todas las permutaciones (con repetición) de dos índices. Cada sub-gráfico:- Debe corresponder solo al año 2020.- Debe ser un scatter plot.- Los ejes horizontal y vertical corresponden al par de índices de la permutación- El color de cada punto se corresponde al continente.- La opacidad de cada punto debe ser `0.3`. 
###Code life_cost_2020=life_cost[life_cost['year']== 2020].reset_index().drop(['index'],axis=1) life_cost_2020 from itertools import product permutaciones= product(life_cost.columns[3:-2], repeat=2) plt.figure(figsize=(30,30)) life_cost_2020 = life_cost[life_cost['year']==2020] j=1 for perm in permutaciones: plt.subplot(6,6,j) for cont in life_cost_2020['continent'].unique(): plt.scatter(life_cost_2020[life_cost_2020['continent']==cont].loc[:, perm[0]], life_cost_2020[life_cost_2020['continent']==cont].loc[:, perm[1]], cmap='tab20', label= cont, alpha=0.3 ) plt.legend(frameon=True, title='Continentes') plt.xlabel(str(perm[0])) plt.ylabel(str(perm[1])) j+=1 plt.show() ###Output _____no_output_____
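###Markdown Note (added): essentially the same pairwise grid can be produced with `seaborn.pairplot`, which also colours the points by continent; the diagonal then shows each index's distribution instead of an index plotted against itself. A sketch, assuming `life_cost` already carries the `continent` column added above: ###Code
import seaborn as sns

index_cols = life_cost.columns[3:-2]          # the six cost-of-living indices
sns.pairplot(
    life_cost[life_cost["year"] == 2020],
    vars=index_cols,
    hue="continent",
    plot_kws={"alpha": 0.3},
    palette="tab20",
)
###Output _____no_output_____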
lect6/.ipynb_checkpoints/early-stopping-checkpoint.ipynb
###Markdown Многие люди используют статистическую проверку гипотез, даже не подозревая об этом. Понятно, что когда кто-то проводит A / B-тест, формально проверяется статистическая гипотеза. Есть достаточно много грабель, на которые можно наступить при проверке гипотез, некоторые из них очевидны (или будут очевидны - проведите достаточно тестов, и вы что-нибудь найдете!), Некоторые из них более тонкие/менее понятные. Один из наиболее хитрых моментов заключается в том, что вы не можете преждевременно прекратить эксперимент.Когда вы начали сбор данных и вдруг на второй день увидели, что p-value стало меньше чем $\alpha$, сразу хочется кричать: "Ура, эта новая кнопка работает! Люди подписываются в рекордном количестве!". На самом деле это чепуха. Попробуем разобраться почему.Хотя использование A / B-тестирования такое "новомодное", само по себе тестирование гипотез существует очень давно. Как минимум, его постоянно используют (в том числе неправильно) в медицинских исследованиях и в области физики высоких энергий. Это означает, что значительное количество ловушек уже было обнаружено ранее, и вам при проведении A / B нет необходимости открывать их заново. Вот недавний пример от AirBnB: [Эксперименты в AirBnB](http://nerds.airbnb.com/experiments-at-airbnb/). Аналитики там пытаются останавливать эксперименты пораньше, тут кратко описано, какую методологию они для этого используют. В сообщении недостаточно информации, чтобы показать, что их метод работает, но давайте предположим, что это так. Одна вещь, о которой не говорится, - это то, как ранняя остановка влияет на мощность вашего теста. ###Code %matplotlib inline import numpy as np import matplotlib.pyplot as plt import scipy as sp import scipy.stats as stats import random random.seed(12345) np.random.seed(5345436) ###Output _____no_output_____ ###Markdown Первое, что нам понадобится - выборка. Пусть в эксперименте вы измеряете коэффициент конверсии на своем веб-сайте. Средняя конверсия - 6%. Группа А будет иметь средний коэффициент конверсии 6%, используя параметр `difference`, мы контролируем, насколько большой должна быть разница между двумя выборками. Это позволяет нам генерировать набор наблюдений, в которых истинная разница равна нулю или любому другому значению, которое мы хотели бы исследовать. Мы также можем установить размер выборки (количество наблюдений) `N`.Будем использовать t-критерий Стьюдента для расчета p-значения для каждого эксперимента. В этом примере мы хотим знать, улучшили ли изменения на нашем веб-сайте коэффициент конверсии или нет. p-значение - это вероятность того, что мы при отсутствии разницы между выборками получим такое же или еще более экстремальное значение статистики, как в эксперименте. В этом случае вы (как будто бы) можете рассчитать p-значение, используя [t-test Стьюдента](http://en.wikipedia.org/wiki/Student%27s_t-test). ###Code def two_samples(difference, N=6500, delta_variance=0.): As = np.random.normal(6., size=N) Bs = np.random.normal(6. + difference, scale=1+delta_variance, size=N) return As, Bs def one_sided_ttest(A, B, equal_var=True): t,p = stats.ttest_ind(A, B, equal_var=equal_var) # t-тест, реализованный в scipy, двусторонний, но нас интересует # односторонняя гипотеза (поделим пополам) if t < 0: p /= 2. else: p = 1- p/2. return p ###Output _____no_output_____ ###Markdown Пора проводить эксперимент. Предположим, что всего вы собираете по 100 наблюдений в каждой группе. 
To see what happens to the p-value as the data come in, we will plot it after each new observation arrives in each group. ###Code
# There is no difference between the groups
As, Bs = two_samples(0., N=100)

p_vals = []
for n in range(len(As)-1):
    n += 2
    p_vals.append(one_sided_ttest(As[:n], Bs[:n]))

a = plt.axes()
a.plot(np.arange(len(As)-1)+2, p_vals)
a.set_ylabel("Observed p-value")
a.set_xlabel("Number of observations")
a.set_ylim([0., 1.])
a.hlines(0.05, 0, 100)
###Output _____no_output_____
###Markdown As you can see, it moves around a bit over the course of the experiment. Although in this particular case it never dropped below 0.05, you do not have to re-run this cell very many times to find one where it does.
If you were to run a large number of experiments in which you show the same web page to both groups, and plot the p-value you observe each time, you would get something like this: ###Code
def repeat_experiment(repeats=10000, diff=0.):
    p_values = []
    for i in range(repeats):
        A,B = two_samples(diff, N=100)
        p = one_sided_ttest(A,B)
        p_values.append(p)

    plt.hist(p_values, range=(0,1.), bins=20)
    plt.axvspan(0., 0.1, facecolor="red", alpha=0.5)
    plt.xlabel("Observed p-value")
    plt.ylabel("Count")

repeat_experiment()
###Output _____no_output_____
###Markdown As you can see, roughly 10% of the observed p-values fall in the red region. Likewise, 5% of your experiments will give you a p-value below 0.05. And all of this despite there being no difference at all!
This brings us to what the hypothesis-testing procedure actually does for you. By deciding that group B has won whenever you see a p-value below 0.05, you are saying: in the long run, after many repetitions of this experiment, I will produce a "false positive" in at most 5% of cases (type I errors).
Note that this says nothing about how likely it is that you have actually found something. It does not tell you whether you made the right call in a specific case. Only that, in the end, with many repetitions of this experiment, you will be wrong 5% of the time.
So what does early stopping have to do with it? ###Code
def repeat_early_stop_experiment(repeats=1000, diff=0.):
    p_values = []
    for i in range(repeats):
        A,B = two_samples(diff, N=100)
        for n in range(len(A)-1):
            n += 2
            p = one_sided_ttest(A[:n], B[:n])
            if p < 0.05:
                break
        p_values.append(p)

    plt.hist(p_values, range=(0,1.), bins=20)
    plt.axvspan(0., 0.05, facecolor="red", alpha=0.5)
    plt.xlabel("Observed p-value")
    plt.ylabel("Count")
    return p_values

p_values = repeat_early_stop_experiment()
sum(np.array(p_values)<0.05)
###Output _____no_output_____
###Markdown You see a small p-value, stop the experiment, declare that group B has won, and go off to celebrate. In reality you see a p-value < 0.05 far more often than in 5% of all experiments (1000 * 0.05 = 50). In the long run you will be wrong much more often than 5% of the time.
What is power? The power of a test is the probability that you detect a change where there really is one (1 - FNR). Obviously this must depend on how large the difference between groups A and B is, and on how many observations you collect. The bigger the improvement, the easier it is to detect. If B raises your conversion rate from 6% to 20%, you will notice it far more easily than if it moves from 6% to 7%.
Let's see what happens if you stop experiments early. 
###Code
def keep_or_not(improvement, threshold=0.05, N=100, repeats=1000, early_stop=False):
    keep = 0
    for i in range(repeats):
        A,B = two_samples(improvement, N=N)
        if early_stop:
            for n in range(len(A)-1):
                n += 2
                p = one_sided_ttest(A[:n], B[:n])
                if p < 0.05:
                    break
        else:
            p = one_sided_ttest(A, B)
        if p <= threshold:
            keep += 1
    return float(keep)/repeats

def power_plot(improvements, normal_keeps, early_keeps):
    plt.plot(improvements, normal_keeps, "bo", label="normal")
    plt.plot(improvements, early_keeps, "r^", label="early")
    plt.legend(loc="best")
    plt.ylim((0, 100))
    plt.xlim((0, improvements[-1]*1.1))
    plt.grid()
    plt.xlabel("Size of the improvement")
    plt.ylabel("Fraction of changes accepted (%)")
    plt.axhline(5)

improvements = np.linspace(1., 40, 9)
keeps = []
early_keeps = []
for improvement in improvements:
    keeps.append(keep_or_not(improvement/100.)*100)
    early_keeps.append(keep_or_not(improvement/100., early_stop=True)*100)

power_plot(improvements, keeps, early_keeps)
###Output _____no_output_____
###Markdown Excellent! By stopping the experiment early, you switch your web page to the alternative more often when there really is an effect! If the improvement is from 6% to 7%, the probability that you switch correctly is almost six times higher.
It looks as if things got better. But no :( The reason is that we are not accounting for the number of false positives. As we saw earlier, if you stop early you incorrectly change your site design more often than 5% of the time. The power of a test also depends on how often you are willing to be wrong. If you have no problem with being wrong all the time, then the best strategy is to always switch. You will then switch correctly 100% of the time.
So what is the false positive rate of the early-stopping strategy really? ###Code
def false_positives(repeats=1000, early_stop=False, threshold=0.05):
    switches = 0
    for i in range(repeats):
        A,B = two_samples(0., N=100)
        if early_stop:
            for n in range(len(A)-1):
                n += 2
                p = one_sided_ttest(A[:n], B[:n])
                if p < threshold:
                    break
        else:
            p = one_sided_ttest(A, B)
        if p < threshold:
            switches += 1
    return float(switches)/repeats

print("Regular test:", false_positives())
print("Early stopping:", false_positives(early_stop=True))
###Output Regular test: 0.047 Early stopping: 0.295
###Markdown When the decision is made after all the observations have been collected, the false positive rate really is close to 5%, just as the hypothesis-testing procedure promises. With the early-stopping method you get about 30% false positives!
How much lower do we have to set the p-value threshold with the early-stopping strategy in order to get the same 5% false positive rate? ###Code
thresholds = (0.0025, 0.005, 0.01, 0.02, 0.03)
fp_rates = [false_positives(threshold=p, early_stop=True) for p in thresholds]
plt.plot(thresholds, fp_rates, "bo")
plt.xlabel("p-value threshold")
plt.ylabel("False positive rate")
plt.grid()
###Output _____no_output_____
###Markdown The threshold used for stopping the experiment early has to be much lower than 0.05 to reach the same actual rate of false positives. Now we can re-run our power comparison of the two tests (the normal one and the normal one with early stopping). Below are the power curves of both tests at a 5% false positive rate. 
###Code improvements = np.linspace(1., 40, 9) keeps = [] early_keeps = [] for improvement in improvements: keeps.append(keep_or_not(improvement/100.)*100) early_keeps.append(keep_or_not(improvement/100., early_stop=True, threshold=0.005)*100) power_plot(improvements, keeps, early_keeps) ###Output _____no_output_____
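###Markdown One simple (if conservative) way to keep looking at the data as it arrives is to fix the number of looks in advance and split the error budget across them, Bonferroni-style. The sketch below reuses the `false_positives` helper defined above; because the repeated looks are highly correlated, the realised false positive rate should come out well below 5%, which is why the empirically tuned threshold of roughly 0.005 found above is less wasteful.
###Code
# Bonferroni-style per-look threshold: the overall alpha divided by the
# number of looks (up to ~99 looks with N=100 observations per group)
n_looks = 99
per_look_threshold = 0.05 / n_looks
print("Per-look threshold:", per_look_threshold)
print("Realised false positive rate:",
      false_positives(early_stop=True, threshold=per_look_threshold))
###Output _____no_output_____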
OnlinePredictiveCoding/online_predictive_coding.ipynb
###Markdown Online Predictive CodingIn this notebook, we implement the first version of our prototype for learning from varying feature spaces. Model Design- add the plot we have on the desktop computer here. ###Code import numpy as np import pandas as pd class error_module: def __init__(self, size, lr): self.w = np.zeros(size) self.lr = lr def predict(self, x): return np.dot(self.w, x) def update(self, x, y): yhat = self.predict(x) # regression loss = 0.5 * (y - yhat)**2 self.w += self.lr * (y - yhat) return loss class classifier_module: def __init__(self, size, lr): self.w = np.zeros(size) self.lr = lr def predict(self, x): return np.dot(self.w, x) def update(self, x, y): loss = np.maximum(0, 1.0 - y * np.dot(self.w, x)) if loss > 0: self.w += x * y * self.lr return loss dataset_names = ["german", "ionosphere", "spambase", "magic", "a8a"] root_path, extension = "./datasets/", "_numeric" def get_path(name): '''returns a path pair to the preprocessed datasets X and y csv files.''' path = root_path + name + extension return path + "_X.csv", path + "_y.csv" def read_dataset(X_path, y_path): '''reads and returns numpy arrays in a given pair of paths for X and y.''' X = pd.read_csv(X_path).values y = pd.read_csv(y_path)['0'].values return X, y def simulate_varying(X): # multivariate normal distribution '''Get the data and generate a varying feature space pattern. Possible concerns: thresholding messing up the distribution?''' # create a covariance matrix cov = np.random.rand(num_features, num_features) cov = np.dot(cov, cov.transpose()) # to have a positive semi-definite matrix # create a mean vector mean = np.random.rand(len(X[0])) # sample from multivariate gaussian w/ given mean and cov spaces = np.random.multivariate_normal(mean, cov, len(X)) # threshold samples for 1-hot encoding spaces[spaces < 0] = 0 spaces[spaces != 0] = 1 return spaces def simulate_random_varying(X): # discrete uniform distribution matrix = np.random.randint(2, size=(len(X), len(X[0]))) return matrix def quant(x,l): one_hot = [] for i in x: if i != 0: one_hot.append(1) else: one_hot.append(0) one_hot = np.array(one_hot) qts=[x] for i in range(l): qt = (one_hot-x) * (i+1) / l qts.append(x+qt) qts.append(one_hot) return qts X_path, y_path = get_path("ionosphere") X, y = read_dataset(X_path, y_path) num_features = len(X[0]) folds = 20 learning_rate = 0.01 # multivariate gaussian mask with threshold 0 fold_error_rates = [] predictions = [] losses = [] for f in range(folds): error_count = 0 # shuffle for each fold l = list(range(len(X))) np.random.shuffle(l) X, y = X[l], y[l] mask = simulate_varying(X) # multivariate # initialize model model = classifier_module(num_features, learning_rate) #model = error_module(num_features, learning_rate) for i in range(len(X)): # predict and suffer yhat = model.predict(X[i] * mask[i]) loss = model.update(X[i] * mask[i], y[i]) # bookkeeping predictions.append(yhat) losses.append(loss) if np.sign(yhat) != y[i]: error_count += 1 fold_error_rates.append(error_count/len(X)) print(learning_rate, np.mean(fold_error_rates)) class OnlinePredictiveCoding: def __init__(self, num_layers, num_features): self.classifier = classifier_module(num_features, learning_rate) self.num_layers = num_layer self.num_features = num_features self.error_modules = [] for i in range(num_layers - 1): -xt self.error_modules.append(error_module(num_features, learning_rate)) def forward(self, x, y): input_list = quant(x, num_layers) input_list.reverse() for i in range(len(input_list)): xi = input_list[i] if i == 
len(input_list) - 1:
                model = self.classifier
            else:
                model = self.error_modules[i]
            yhati = model.predict(xi)
            # the original cell breaks off at this point; as a minimal completion
            # (an assumption, not necessarily the intended design) every module is
            # updated towards the label y and the last prediction/loss is returned
            lossi = model.update(xi, y)
        return yhati, lossi

opc = OnlinePredictiveCoding(3, 20)

result = quant(X[6], 2)

import matplotlib.pyplot as plt
plt.plot(result[0])
plt.plot(result[1])
plt.plot(result[2])
plt.plot(result[3])
###Output _____no_output_____
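###Markdown The class above is left unfinished and, as written, will not run (the constructor refers to `num_layer` and contains a stray `-xt` token). Below is a minimal, self-contained sketch of the same prototype under one possible interpretation (an assumption, not necessarily the intended design): every quantisation level produced by `quant` gets its own linear module, intermediate levels use a squared-error (LMS) update, the final level uses the hinge-loss classifier, and every module is updated towards the label. It reuses the `quant` helper and the `X`, `y`, `num_features` variables loaded above, and assumes the labels are encoded as +1/-1 as the hinge update expects.
###Code
import numpy as np

class _ErrorModule:
    """Linear predictor trained with a squared-error (LMS) update."""
    def __init__(self, size, lr):
        self.w = np.zeros(size)
        self.lr = lr

    def update(self, x, y):
        yhat = np.dot(self.w, x)
        self.w += self.lr * (y - yhat) * x  # LMS gradient step
        return 0.5 * (y - yhat) ** 2

class _ClassifierModule:
    """Linear classifier trained with a hinge-loss (perceptron-style) update."""
    def __init__(self, size, lr):
        self.w = np.zeros(size)
        self.lr = lr

    def update(self, x, y):
        loss = max(0.0, 1.0 - y * np.dot(self.w, x))
        if loss > 0:
            self.w += self.lr * y * x
        return loss

class OnlinePredictiveCodingSketch:
    def __init__(self, num_levels, num_features, lr=0.01):
        # quant(x, num_levels) returns num_levels + 2 versions of x
        # (the raw input, the interpolated steps and the one-hot mask),
        # so num_levels + 1 error modules plus one classifier covers them all
        self.num_levels = num_levels
        self.classifier = _ClassifierModule(num_features, lr)
        self.error_modules = [_ErrorModule(num_features, lr)
                              for _ in range(num_levels + 1)]

    def forward(self, x, y):
        input_list = quant(x, self.num_levels)
        input_list.reverse()  # coarsest (one-hot) version first, raw input last
        losses = []
        for i, xi in enumerate(input_list):
            model = self.classifier if i == len(input_list) - 1 else self.error_modules[i]
            losses.append(model.update(xi, y))
        return losses

# tiny smoke test on the ionosphere arrays loaded above
opc_sketch = OnlinePredictiveCodingSketch(3, num_features)
print(opc_sketch.forward(X[0], y[0]))
###Output _____no_output_____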
jupyter-notebooks/running people committee-meeting-attendees pipeline.ipynb
###Markdown Run the dependant pipelines ###Code %%bash cd /pipelines KNESSET_LOAD_FROM_URL=1 KNESSET_DATASERVICE_INCREMENTAL= \ dpp run --no-use-cache --concurrency 2 --verbose \ ./committees/kns_committeesession,./members/mk_individual ###Output [./committees/kns_committeesession:T_0] >>> INFO :cba22646 RUNNING ./committees/kns_committeesession [./committees/kns_committeesession:T_0] >>> INFO :cba22646 Collecting dependencies [./committees/kns_committeesession:T_0] >>> INFO :cba22646 Running async task [./committees/kns_committeesession:T_0] >>> INFO :cba22646 Waiting for completion [./committees/kns_committeesession:T_0] >>> INFO :cba22646 Async task starting [./committees/kns_committeesession:T_0] >>> INFO :cba22646 Building process chain: [./committees/kns_committeesession:T_0] >>> INFO :- load_resource [./committees/kns_committeesession:T_0] >>> INFO :- knesset.dump_to_path [./committees/kns_committeesession:T_0] >>> INFO :- knesset.dump_to_sql [./committees/kns_committeesession:T_0] >>> INFO :- (sink) [./members/mk_individual:T_1] >>> INFO :e2d6f365 RUNNING ./members/mk_individual [./members/mk_individual:T_1] >>> INFO :e2d6f365 Collecting dependencies [./members/mk_individual:T_1] >>> INFO :e2d6f365 Running async task [./members/mk_individual:T_1] >>> INFO :e2d6f365 Waiting for completion [./members/mk_individual:T_1] >>> INFO :e2d6f365 Async task starting [./members/mk_individual:T_1] >>> INFO :e2d6f365 Building process chain: [./members/mk_individual:T_1] >>> INFO :- load_resource [./members/mk_individual:T_1] >>> INFO :- knesset.dump_to_path [./members/mk_individual:T_1] >>> INFO :- knesset.dump_to_sql [./committees/kns_committeesession:T_0] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (1): storage.googleapis.com:80 [./committees/kns_committeesession:T_0] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/committees/kns_committeesession/datapackage.json HTTP/1.1" 200 3751 [./members/mk_individual:T_1] >>> INFO :- (sink) [./committees/kns_committeesession:T_0] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (1): storage.googleapis.com:80 [./committees/kns_committeesession:T_0] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/committees/kns_committeesession/datapackage.json HTTP/1.1" 200 3751 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (1): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/datapackage.json HTTP/1.1" 200 14563 [./committees/kns_committeesession:T_0] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (1): storage.googleapis.com:80 [./committees/kns_committeesession:T_0] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/committees/kns_committeesession/kns_committeesession.csv HTTP/1.1" 200 41799011 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (1): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/datapackage.json HTTP/1.1" 200 14563 [./committees/kns_committeesession:T_0] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (2): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :Starting new HTTP 
connection (1): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/mk_individual_positions.csv HTTP/1.1" 200 7154433 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (2): storage.googleapis.com:80 [./committees/kns_committeesession:T_0] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/committees/kns_committeesession/kns_committeesession.csv HTTP/1.1" 200 41799011 [./committees/kns_committeesession:T_0] >>> INFO :load_resource: INFO :loaded 10000 rows [./committees/kns_committeesession:T_0] >>> INFO :load_resource: INFO :loaded 20000 rows [./committees/kns_committeesession:T_0] >>> INFO :load_resource: INFO :loaded 30000 rows [./committees/kns_committeesession:T_0] >>> INFO :load_resource: INFO :loaded 40000 rows [./committees/kns_committeesession:T_0] >>> INFO :load_resource: INFO :loaded 50000 rows [./committees/kns_committeesession:T_0] >>> INFO :load_resource: INFO :loaded 60000 rows [./committees/kns_committeesession:T_0] >>> INFO :load_resource: INFO :loaded 70000 rows [./committees/kns_committeesession:T_0] >>> INFO :load_resource: INFO :Processed 74409 rows [./committees/kns_committeesession:T_0] >>> INFO :cba22646 DONE /usr/local/lib/python3.6/site-packages/datapackage_pipelines/specs/../lib/load_resource.py [./committees/kns_committeesession:T_0] >>> INFO :knesset.dump_to_path: INFO :Processed 74409 rows [./committees/kns_committeesession:T_0] >>> INFO :knesset.dump_to_sql: INFO :Processed 74409 rows [./committees/kns_committeesession:T_0] >>> INFO :cba22646 DONE /usr/local/lib/python3.6/site-packages/datapackage_pipelines/manager/../lib/internal/sink.py [./committees/kns_committeesession:T_0] >>> INFO :cba22646 DONE /pipelines/datapackage_pipelines_knesset/processors/dump_to_path.py [./committees/kns_committeesession:T_0] >>> INFO :cba22646 DONE /pipelines/datapackage_pipelines_knesset/processors/dump_to_sql.py [./committees/kns_committeesession:T_0] >>> INFO :cba22646 DONE V ./committees/kns_committeesession {'.dpp': {'out-datapackage-url': '../data/committees/kns_committeesession/datapackage.json'}, 'bytes': None, 'count_of_rows': 74409, 'dataset_name': '_', 'hash': '69ff9c2cc04646502e81a5dc795f85ea'} [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/mk_individual_positions.csv HTTP/1.1" 200 7154433 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (1): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/mk_individual.csv HTTP/1.1" 200 233706 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (2): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/mk_individual.csv HTTP/1.1" 200 233706 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (1): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/kns_knessetdates.csv HTTP/1.1" 200 14303 [./members/mk_individual:T_1] >>> INFO 
:load_resource: DEBUG :Starting new HTTP connection (2): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/kns_knessetdates.csv HTTP/1.1" 200 14303 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (1): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/mk_individual_names.csv HTTP/1.1" 200 50721 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (2): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/mk_individual_names.csv HTTP/1.1" 200 50721 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (1): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/mk_individual_factions.csv HTTP/1.1" 200 184671 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (2): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/mk_individual_factions.csv HTTP/1.1" 200 184671 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (1): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/mk_individual_faction_chairpersons.csv HTTP/1.1" 200 5484 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/mk_individual_faction_chairpersons.csv HTTP/1.1" 200 5484 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (1): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/mk_individual_committees.csv HTTP/1.1" 200 1019784 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (2): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/mk_individual_committees.csv HTTP/1.1" 200 1019784 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (1): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/mk_individual_govministries.csv HTTP/1.1" 200 99302 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (2): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/mk_individual_govministries.csv HTTP/1.1" 200 99302 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (1): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO 
:load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/factions.csv HTTP/1.1" 200 21670 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (2): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/factions.csv HTTP/1.1" 200 21670 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (1): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/faction_memberships.csv HTTP/1.1" 200 126687 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :Starting new HTTP connection (2): storage.googleapis.com:80 [./members/mk_individual:T_1] >>> INFO :load_resource: DEBUG :http://storage.googleapis.com:80 "GET /knesset-data-pipelines/data/members/mk_individual/faction_memberships.csv HTTP/1.1" 200 126687 [./members/mk_individual:T_1] >>> INFO :load_resource: INFO :Processed 18342 rows [./members/mk_individual:T_1] >>> INFO :knesset.dump_to_path: INFO :Processed 18342 rows [./members/mk_individual:T_1] >>> INFO :knesset.dump_to_sql: INFO :Processed 18342 rows [./members/mk_individual:T_1] >>> INFO :e2d6f365 DONE /usr/local/lib/python3.6/site-packages/datapackage_pipelines/specs/../lib/load_resource.py [./members/mk_individual:T_1] >>> INFO :e2d6f365 DONE /usr/local/lib/python3.6/site-packages/datapackage_pipelines/manager/../lib/internal/sink.py [./members/mk_individual:T_1] >>> INFO :e2d6f365 DONE /pipelines/datapackage_pipelines_knesset/processors/dump_to_path.py [./members/mk_individual:T_1] >>> INFO :e2d6f365 DONE /pipelines/datapackage_pipelines_knesset/processors/dump_to_sql.py [./members/mk_individual:T_1] >>> INFO :e2d6f365 DONE V ./members/mk_individual {'.dpp': {'out-datapackage-url': '../data/members/mk_individual/datapackage.json'}, 'bytes': None, 'count_of_rows': 18342, 'dataset_name': '_', 'hash': 'd94b918b09316158156804fda0cbb854'} INFO :RESULTS: INFO :SUCCESS: ./committees/kns_committeesession {'bytes': None, 'count_of_rows': 74409, 'dataset_name': '_', 'hash': '69ff9c2cc04646502e81a5dc795f85ea'} INFO :SUCCESS: ./members/mk_individual {'bytes': None, 'count_of_rows': 18342, 'dataset_name': '_', 'hash': 'd94b918b09316158156804fda0cbb854'} ###Markdown Inspect the source dataChoose a committee session ID to focus on, make sure it has all the fields ###Code CommitteeSessionID = 2059313 from dataflows import Flow, load, printer, filter_rows committeesession_data = Flow( load('/pipelines/data/committees/kns_committeesession/datapackage.json'), filter_rows(lambda row: row['CommitteeSessionID'] == CommitteeSessionID), printer(tablefmt='html') ).results() ###Output _____no_output_____ ###Markdown Download the protocol text ###Code import os text_url = 'https://storage.googleapis.com/knesset-data-pipelines/data/committees/meeting_protocols_text/{}'.format(committeesession_data[0][0][0]['text_parsed_filename']) filename = '/pipelines/data/committees/meeting_protocols_text/{}'.format(committeesession_data[0][0][0]['text_parsed_filename']) os.makedirs(os.path.dirname(filename), exist_ok=True) cmd = 'curl {} > {}'.format(text_url, filename) !{cmd} ###Output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 190k 100 190k 0 0 285k 0 --:--:-- --:--:-- --:--:-- 285k 
###Markdown Modify the pipeline yaml to run on the selected committee session IDUnder `committee-meeting-attendees:` set the following to parse a single meeting (+add cache):``` - run: filter cache: true parameters: resources: kns_committeesession in: - CommitteeSessionID: 2068104 - run: committee_meeting_attendees parameters: filter-meeting-id: [2068104]``` Delete the cache hash and run the pipeline ###Code %%bash cd /pipelines rm -rf data/people/committees/meeting-attendees/cache_hash KNESSET_DATASERVICE_INCREMENTAL= \ dpp run --verbose \ ./people/committee-meeting-attendees ###Output [./people/committee-meeting-attendees:T_0] >>> INFO :8a208434 RUNNING ./people/committee-meeting-attendees [./people/committee-meeting-attendees:T_0] >>> INFO :8a208434 Collecting dependencies [./people/committee-meeting-attendees:T_0] >>> INFO :8a208434 Running async task [./people/committee-meeting-attendees:T_0] >>> INFO :8a208434 Waiting for completion [./people/committee-meeting-attendees:T_0] >>> INFO :8a208434 Async task starting [./people/committee-meeting-attendees:T_0] >>> INFO :8a208434 Searching for existing caches [./people/committee-meeting-attendees:T_0] >>> INFO :Found cache for step 3: filter [./people/committee-meeting-attendees:T_0] >>> INFO :8a208434 Building process chain: [./people/committee-meeting-attendees:T_0] >>> INFO :- cache_loader [./people/committee-meeting-attendees:T_0] >>> INFO :- committee_meeting_attendees [./people/committee-meeting-attendees:T_0] >>> INFO :- join_committee_meeting_attendees_mks [./people/committee-meeting-attendees:T_0] >>> INFO :- knesset.dump_to_path [./people/committee-meeting-attendees:T_0] >>> INFO :- knesset.dump_to_sql [./people/committee-meeting-attendees:T_0] >>> INFO :- (sink) [./people/committee-meeting-attendees:T_0] >>> INFO :committee_meeting_attendees: INFO :getting attendees for meeting 2059313 [./people/committee-meeting-attendees:T_0] >>> INFO :8a208434 DONE /usr/local/lib/python3.6/site-packages/datapackage_pipelines/specs/../lib/cache_loader.py [./people/committee-meeting-attendees:T_0] >>> INFO :committee_meeting_attendees: INFO :Processed 1016 rows [./people/committee-meeting-attendees:T_0] >>> INFO :8a208434 DONE /pipelines/people/committee_meeting_attendees.py [./people/committee-meeting-attendees:T_0] >>> INFO :join_committee_meeting_attendees_mks: INFO :Processed 1 rows [./people/committee-meeting-attendees:T_0] >>> INFO :knesset.dump_to_path: INFO :Processed 1 rows [./people/committee-meeting-attendees:T_0] >>> INFO :8a208434 DONE /pipelines/people/join_committee_meeting_attendees_mks.py [./people/committee-meeting-attendees:T_0] >>> INFO :knesset.dump_to_sql: INFO :Processed 1 rows [./people/committee-meeting-attendees:T_0] >>> INFO :8a208434 DONE /pipelines/datapackage_pipelines_knesset/processors/dump_to_path.py [./people/committee-meeting-attendees:T_0] >>> INFO :8a208434 DONE /usr/local/lib/python3.6/site-packages/datapackage_pipelines/manager/../lib/internal/sink.py [./people/committee-meeting-attendees:T_0] >>> INFO :8a208434 DONE /pipelines/datapackage_pipelines_knesset/processors/dump_to_sql.py [./people/committee-meeting-attendees:T_0] >>> INFO :8a208434 DONE V ./people/committee-meeting-attendees {'.dpp': {'out-datapackage-url': '../data/people/committees/meeting-attendees/datapackage.json'}, 'bytes': None, 'count_of_rows': 1, 'dataset_name': '_', 'hash': 'b930d619b391d8f667d60cecc2a95243'} INFO :RESULTS: INFO :SUCCESS: ./people/committee-meeting-attendees {'bytes': None, 'count_of_rows': 1, 'dataset_name': '_', 
'hash': 'b930d619b391d8f667d60cecc2a95243'} ###Markdown Inspect the data ###Code from dataflows import Flow, load, printer Flow( load('/pipelines/data/people/committees/meeting-attendees/datapackage.json'), printer(tablefmt='html') ).process() ###Output _____no_output_____
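###Markdown As an optional sanity check (a sketch, assuming the dumped attendees package keeps a `CommitteeSessionID` column and using the `CommitteeSessionID` chosen above), the output can be filtered down to just that session before printing:
###Code
from dataflows import Flow, load, filter_rows, printer

Flow(
    load('/pipelines/data/people/committees/meeting-attendees/datapackage.json'),
    filter_rows(lambda row: row.get('CommitteeSessionID') == CommitteeSessionID),
    printer(tablefmt='html')
).process()
###Output _____no_output_____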
M220P/mflix-python/notebooks/deletes.ipynb
###Markdown Your First Delete As usual, we'll import MongoClient and set up our connection uri. ###Code from pymongo import MongoClient uri = "mongodb+srv://m220-user:[email protected]/test" ###Output _____no_output_____ ###Markdown And then intialization our connection and get back a MongoClient object. ###Code client = MongoClient(uri) ###Output _____no_output_____ ###Markdown Since we're learning about deletes in this lesson and don't want to work with any of our production data, we'll define a new database and collection name to work with. ###Code lessons = client.lessons deletes = lessons.deletes ###Output _____no_output_____ ###Markdown Now that we have a collection object named **deletes** with no data in it, let's insert some data.We'll insert 100 documents with an **_id** that ranges from 0 to 99, and a field called **random_bool** that will randomly be true or false. We'll run an assertion stating that we expect 100 ObjectIds to have been inserted. If this isn't true we'll see an error.We've added the drop method at the beginning of this code cell to ensure repeatability in case we want to run through this lesson again. ###Code import random random.seed(42) deletes.drop() imr = deletes.insert_many([{'_id': val, 'random_bool': random.choice([True, False])} for val in range(100)]) assert len(imr.inserted_ids) == 100 ###Output _____no_output_____ ###Markdown Ok, let's grab the first 3 documents to get a sense for what they look like. ###Code list(deletes.find().limit(3)) ###Output _____no_output_____ ###Markdown Ok, we're convinced that we have a fairly random **random_bool** field and an **_id** with values between 0 and 99.We've learned how to create, read, and update documents. Now to delete.**pymongo** offers two idiomatic delete methods, **delete_one** and **delete_many**. Let's look at them both to get a sense for how they work. delete_one `delete_one` is a lot like `find_one`. It takes a predicate to match the document you want to delete, finds the document, and deletes it. If multiple documents match the predicate, `delete_one` will only delete the first document matched.Let's use `delete_one` to delete the first document where **random_bool** is True. Based on what I said, we should be left with 99 documents in the collection. We'll assign the DeleteResult object to the variable **dr** so we can print out the **deleted_count** property which tells us how many documents were deleted. ###Code dr = deletes.delete_one({'random_bool': True}) dr.deleted_count ###Output _____no_output_____ ###Markdown `delete_one` can be thought of like a precision scalpel. If we know some value or values that uniquely identify a document, we're guaranteed to only delete that document.We know the **_id** must be unique, so let's delete the document with **'_id': 99** First we'll find the document to prove it exists, then delete it, then try to find it again. We should get None back for the second find. ###Code deletes.find_one({'_id': 99}) deletes.delete_one({'_id': 99}) deletes.find_one({'_id': 99}) ###Output _____no_output_____ ###Markdown delete_many Unlike `delete_one`, `delete_many` deletes all documents that match the supplied predicate. Because of this behavior, `delete_many` is a little more "dangerous".Let's first get a count of how many documents now have False and True for their **random_bool** value. Then, we'll use `delete_many` to delete **all** documents where **random_bool** is False. 
###Code len(list(deletes.find({'random_bool': False}))) len(list(deletes.find({'random_bool': True}))) ###Output _____no_output_____ ###Markdown 44 documents have a **random_bool** value of False. Our deleted count should be 44, and a count on the collection should yield 54. ###Code dr = deletes.delete_many({'random_bool': False}) dr.deleted_count len(list(deletes.find({'random_bool': True}))) ###Output _____no_output_____
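###Markdown Throughout this lesson we counted documents with `len(list(deletes.find(...)))`, which pulls every matching document to the client. For larger collections it is cheaper to let the server do the counting. A small sketch using `count_documents`, which is available in modern PyMongo versions (3.7+):
###Code
# Server-side counts instead of materialising the cursor on the client
remaining_true = deletes.count_documents({'random_bool': True})
remaining_total = deletes.count_documents({})
print(remaining_true, remaining_total)
###Output _____no_output_____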
Lab3/Lab03_01_solution.ipynb
###Markdown Manual calculation gives 1 for weather = Overcast, temp = Hot. ###Code
predicted = model.predict([[0,1]]) # 0:Overcast, 1:Hot
print("Predicted Value:", predicted)
###Output Predicted Value: [1]
###Markdown Manual calculation gives 1 for weather = Sunny, temp = Mild. ###Code
predicted = model.predict([[2,2]]) # 2:Sunny, 2:Mild
print("Predicted Value:", predicted)
###Output Predicted Value: [1]
###Markdown Manual calculation gives a value near 0.4236 for weather = Rainy, temp = Hot. ###Code
predicted = model.predict([[1,1]]) # 1:Rainy, 1:Hot
print("Predicted Value:", predicted)
###Output Predicted Value: [1]
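###Markdown The manual calculations above refer to the class probabilities behind each prediction. If the fitted `model` exposes `predict_proba` (an assumption: the model is defined earlier in the lab, and scikit-learn naive Bayes and most other classifiers provide this method), those probabilities can be inspected directly and compared against the hand-computed value:
###Code
# Probability of each class for weather=Rainy (1), temp=Hot (1); the positive-class
# column should be comparable to the manually computed value if the model matches
# the hand calculation
print("Class probabilities:", model.predict_proba([[1, 1]]))
###Output _____no_output_____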
Decision_tree/dt_classifier/.ipynb_checkpoints/decision_tree_classifier-checkpoint.ipynb
###Markdown Decision |Tree Classifier has same concept as Decision Tree Regression except we will be predicting Dicrete value instead of Continious value(in Regression).(Concept of Decision Tree is explained) ###Code import pandas as pd titanic_url_path = 'https://raw.githubusercontent.com/coding-blocks-archives/machine-learning-online-2018/master/13.%20Decision%20Trees/titanic.csv' titanic_dataFrame = pd.read_csv(titanic_url_path) titanic_dataFrame.head() ###Output _____no_output_____ ###Markdown Data Processing.Dropping unnecessary columns ###Code to_drop = ['PassengerId', 'Name', 'Parch',q 'Ticket', 'Cabin', 'Embarked'] titanic_dataFrame.drop(columns=to_drop, inplace=True) titanic_dataFrame.head(10) ###Output _____no_output_____ ###Markdown Filling empty values for age row with MEAN of age ###Code titanic_dataFrame['Age'].fillna(titanic_dataFrame['Age'].mean(), inplace=True) titanic_dataFrame.head(10) ###Output _____no_output_____ ###Markdown Replacing Sex(male,female) into Sex(0,1) ###Code titanic_dataFrame.replace(['female','male'], [0,1], inplace=True) titanic_dataFrame.head() ###Output _____no_output_____ ###Markdown Dividing dataset into Dependent(y) and Independent(X) variables ###Code X, y = titanic_dataFrame.iloc[:, 1:], titanic_dataFrame.iloc[:, :1] ###Output _____no_output_____ ###Markdown Splitting into Training and Test set ###Code from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1) ###Output _____no_output_____ ###Markdown Creating the Decision Tree Classification model Fitting Decision Tree Classification to the Training set ###Code from sklearn.tree import DecisionTreeClassifier titanic_dt = DecisionTreeClassifier(criterion='entropy', random_state=1, max_depth=3) titanic_dt.fit(X_train, y_train) ###Output _____no_output_____ ###Markdown Visualizing the Decision Tree ###Code from sklearn.externals.six import StringIO from IPython.display import Image from sklearn.tree import export_graphviz import pydotplus dot_data = StringIO() export_graphviz(titanic_dt, out_file=dot_data, filled=True, rounded=True, special_characters=True, feature_names=X_train.columns) graph = pydotplus.graph_from_dot_data(dot_data.getvalue()) Image(graph.create_png()) ###Output C:\Users\User\Anaconda3\lib\site-packages\sklearn\externals\six.py:31: DeprecationWarning: The module is deprecated in version 0.21 and will be removed in version 0.23 since we've dropped support for Python 2.7. Please rely on the official version of six (https://pypi.org/project/six/). "(https://pypi.org/project/six/).", DeprecationWarning) ###Markdown Interpreting the above image:1st row is the name of the Independent variable and condition2nd row is entropy(TBD)3rd(samples) row is number of values that fall under that node4th(value) row tells us how many values (in this data) survived given the condition(1 row) Entropyis measure of uncertainty or disorder (also called "purity"). e.g: The higher the entropy the more the disorder we have.So, we always should try to reduce the disorder(entropy) to get a better modelFor more about entropy, follow the [link](https://towardsdatascience.com/entropy-how-decision-trees-make-decisions-2946b9c18c8) Now, let's evaluate the accuracy of our classifier model! 
###Code y_pred = titanic_dt.predict(X_test) ###Output _____no_output_____ ###Markdown We need to compare the predicted values with the ones in Testing set(y_test).To achieve that we are going to need confusion matrix which is very helpful to evaluate the accuracy of the classifiers ###Code from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred) cm ###Output _____no_output_____ ###Markdown Interpreting the Confusion matrix:[[X1, Y1][Y2, X2]] X1 and X2 are number of values that were predicted correctly by the model.Y1 and Y2 are number of values that were predicted incorrectly by the model Let's calculate how many values there are in Testing set and how many survived and not survived ###Code survived = 0 not_survived = 0 for row in y_test["Survived"]: #Loop throught the y_test if row==0: #0 means not_survived not_survived += 1 else: #if survived survived += 1 print("Not Survived: "+ str(not_survived)) print("Survived: "+ str(survived)) ###Output Not Survived: 153 Survived: 115 ###Markdown So, let's get back to Confusion matrix. The model predicted that 135 are not survived, but actually 153 are not survived and 153-135=18( is the model's error).The model predicted that 76 are survived, but actually 115 are survived and 115-76=39(again model's error) In total, model has predicted 18+39=57 values incorrectly and 135+76=211 correctly Let's get the RATIO of correctly predicted values in TEST SET for nowTo get the ratio we need to do the following: Correctly_predicted_values/total_number_of_values ###Code #I calculated the number of correctly predicted values up there and it is 211 and total is length of y_test 211/len(y_test) ###Output _____no_output_____ ###Markdown So, 78% of data in TEST set was predicted correctly by our model created above. Now, let's get the score of the model by its function called SCORE ###Code titanic_dt.score(X_test, y_test) ###Output _____no_output_____ ###Markdown Tadaa! 
It is the same value, which means Score returns the ratio of correctly predicted values which we computed from Confusion matrixHowever, main advantage of confusion matrix is only getting the score of correctly predicted values but also getting the score of each labels(not survived and survived).For score of Survived, we need to: correctly_predicted_for_survived/total_of_survivedfor score of Not Survived, we need to: correcly_predicted_for_not_survived/total_of_not_survived ###Code correctly_predicted_for_survived = 76 total_of_survived = 115 score_of_survived = correctly_predicted_for_survived/total_of_survived score_of_survived correcly_predicted_for_not_survived = 135 total_of_not_survived = 153 score_of_not_survived = correcly_predicted_for_not_survived/total_of_not_survived score_of_not_survived ###Output _____no_output_____ ###Markdown Interpretation of above are:Model prediction was accurate 66% for Survived valuesModel prediction was accurate 88% for Unsurvived values Now, let's create another model and tune the parameters ###Code titanic_dt_2 = DecisionTreeClassifier(criterion='gini', random_state=1, max_depth=5) titanic_dt_2.fit(X_train, y_train) ###Output _____no_output_____ ###Markdown Gini is another way of calculating impurity(like entropy), both are almost same and below is the difference in formulas: Let's predict and store the variable to compare with the y_test in Confusion matrix ###Code y_pred_2 = titanic_dt_2.predict(X_test) cm_2 = confusion_matrix(y_test, y_pred_2) cm_2 ###Output _____no_output_____ ###Markdown Now, we have 38+22=60 values predicted incorrectly. Which means that Entropy is a bit better than Gini criterion Conclusion: Our first model performs better than second one, because criterion=entropy gives better predictions for our data project1 1) Calculate score of survived for the second model 2) Calculate score of not survived for the second model 3) Explain and calculate the total score for the second model with Confusion matrix 4) Use Score function of the second model to get the score and compare with previous question 5) Visualize the Decision Tree for the second model 6) Create a third Decision Tree with entropy and depth to be 5 7) Create a confusion matrix for the third model 8) Calculate the score of the model with the Confusion Matrix and then compute the score with model's function 9) Compare the scores and make sure they are same 10) Which one of these three models is the best?(Why? 
As Conclusion) Answer project 1 Questions from 1-10 are similar to what we have done project2 1) Import a dataset called "Social_Network_Ads.csv" 2)Split dataset into Dependant Variable & Indipendant Variable 3)Normilize the dataset(Gender column) 4)Split into DV&IV into Train and Test sets 5)Create Decision Tree Classifier with max_depth=3 and fit the Train set 6)Visualize the Tree 7)Get the Confusion Matrix for the model 8)Tune the parameters of model and create another one 9)Visualize the Tree and get the Confusion Matrix for the model 10)Compare both models and Conclude which one performed better answer project2 ###Code #Question1 import pandas as pd social_dataset = pd.read_csv('Social_Network_Ads.csv') social_dataset.head() #Question2 X, y = social_dataset.iloc[:, 1:4], social_dataset.iloc[:, 4] #Question3 from sklearn.preprocessing import LabelEncoder label_enc = LabelEncoder() X['Gender'] = label_enc.fit_transform(X['Gender']) #Question4 from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1) #Question5 from sklearn.tree import DecisionTreeClassifier social_dt = DecisionTreeClassifier(criterion='entropy', random_state=1, max_depth=3) social_dt.fit(X_train, y_train) #Question6 from sklearn.externals.six import StringIO from IPython.display import Image from sklearn.tree import export_graphviz import pydotplus dot_data = StringIO() export_graphviz(social_dt, out_file=dot_data, filled=True, rounded=True, special_characters=True, feature_names=X_train.columns) graph = pydotplus.graph_from_dot_data(dot_data.getvalue()) Image(graph.create_png()) #Question7 y_pred = social_dt.predict(X_test) from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred) cm #Error rate error_rate = (3+14)/(58+45+3+14) error_rate #Question 8 #Question 9 - Visualize the same way as before social_dt_2 = DecisionTreeClassifier(criterion='entropy', random_state=1, max_depth=5) social_dt_2.fit(X_train, y_train) y_pred_2 = social_dt_2.predict(X_test) cm_2 = confusion_matrix(y_test, y_pred_2) cm_2 cm_2[0,1] #Error rate 17 / cm_2.sum() ###Output _____no_output_____ ###Markdown Question10Conclusion: Both models perform similar and error rate of both of models are same, but we know(if you did not, now you do) that False Negative is much worse than False Positive(in most cases, depending on data) So, first model has 14 False Negative predictions and second model has 9 False Negatives. 
Therefore, we must choose second model as it has less False Negatives project 3 1) Import a dataset called "diabets.csv"2)use diabet.describe to see the statistic of data3)Replacing diabetes(False,True) into diabetes(0,1).hint:you can use this code for changing the value of true and false to 1 and 0: diabet.applymap(lambda x: 1 if x == True else x)4)Dividing dataset into Dependent(y) and Independent(X)variables5)Splitting into Training and Test set: 70% train and 30% test6)Create Decision Tree Classifier with max_depth=5 and fit the Train set7)Visualize the Tree8)predict y value for test data9)Get the Confusion Matrix for the model10)find the score of prdiction model11)Tune the parameters of model and create another one(hint:use gini)12)Compare both models and Conclude which one performed better answer project 3 ###Code #1 import pandas as pd diabet = pd.read_csv('diabetes.csv') diabet.head(10) #2 diabet.describe() #3 Replacing diabetes(False,True) into diabetes(0,1) diabet = diabet.applymap(lambda x: 1 if x == True else x) diabet = diabet.applymap(lambda x: 0 if x == False else x) diabet.head() #4 Dividing dataset into Dependent(y:diabetes) and Independent(X:pregnancies,plasma,Age,blood pressure,triceps skin thickness,insulin,bmi,diabetes pedigree,age) variables X, y = diabet.iloc[:, :8], diabet.iloc[:, 8:] #5 Splitting into Training and Test set: 70% train(768*0.7=538) and test(768-538=230) from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1) #6 Creating the Decision Tree Classification model #Fitting Decision Tree Classification to the Training set from sklearn.tree import DecisionTreeClassifier diabet_dt = DecisionTreeClassifier(criterion='entropy', random_state=1, max_depth=5) diabet_dt.fit(X_train, y_train) #7 Visualizing the Decision Tree !pip install pydotplus from sklearn.externals.six import StringIO from IPython.display import Image from sklearn.tree import export_graphviz import pydotplus dot_data = StringIO() export_graphviz(diabet_dt, out_file=dot_data, filled=True, rounded=True, special_characters=True, feature_names=X_train.columns) graph = pydotplus.graph_from_dot_data(dot_data.getvalue()) Image(graph.create_png()) ###Output Requirement already satisfied: pydotplus in c:\users\pc\appdata\local\enthought\canopy\edm\envs\user\lib\site-packages (2.0.2) Requirement already satisfied: pyparsing>=2.0.1 in c:\users\pc\appdata\local\enthought\canopy\edm\envs\user\lib\site-packages (from pydotplus) (2.2.0) ###Markdown Interpreting the above image:1st row is the name of the Independent variable and condition2nd row is entropy(TBD)3rd(samples) row is number of values that fall under that node4th(value) row tells us how many values (in this data) survived given the condition(1 row) ###Code #8 y prediction y_pred = diabet_dt.predict(X_test) #9 confusion matrix from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred) cm #10 diabet = 0 diabet = 0 not_diabet = 0 for row in y_test["diabetes"]: #Loop throught the y_test if row==0: #0 means not_diabet not_diabet += 1 else: #if diabet diabet += 1 print("Not diabet: "+ str(not_diabet)) print("diabet: "+ str(diabet)) ###Output Not diabet: 146 diabet: 85 ###Markdown So, let's get back to Confusion matrix. 
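###Markdown The entropy shown in each node measures how mixed the class labels are at that node: 0 for a pure node, and 1 for a perfect 50/50 split of two classes when it is computed in bits (base-2 logarithm), which is how the exported trees display it. A small sketch of the formula, applied to the root of the diabetes tree using the `y_train` split from above:
###Code
import numpy as np

def node_entropy(labels):
    """Entropy (in bits) of the class distribution at a node."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# entropy of the root node of the diabetes tree
print(node_entropy(y_train['diabetes']))
###Output _____no_output_____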
The model predicted that 121 are not diabet, but actually 146 are not diabet and 146-121=25( is the model's error).The model predicted that 56 are diabet, but actually 85 are got diabet and 85-56=29(again model's error)In total, model has predicted 25+29=54 values incorrectly and 121+56=177 correctlyLet's get the RATIO of correctly predicted values in TEST SET for nowTo get the ratio we need to do the following: Correctly_predicted_values/total_number_of_values ###Code #I calculated the number of correctly predicted values up there and it is 177 and total is length of y_test 177/len(y_test) ###Output _____no_output_____ ###Markdown So, 77% of data in TEST set was predicted correctly by our model created above. Now, let's get the score of the model by its function called SCORE ###Code diabet_dt.score(X_test, y_test) ###Output _____no_output_____ ###Markdown Tadaa! It is the same value, which means Score returns the ratio of correctly predicted values which we computed from Confusion matrixHowever, main advantage of confusion matrix is only getting the score of correctly predicted values but also getting the score of each labels(not diabet and diabet).For score of diabet, we need to: correctly_predicted_for_diabet/total_of_diabetfor score of Not diabet, we need to: correctly_predicted_for_not_diabet/total_of_not_diabet ###Code correctly_predicted_for_diabet = 56 total_of_diabet = 85 score_of_diabet = correctly_predicted_for_diabet/total_of_diabet score_of_diabet correctly_predicted_for_not_diabet = 121 total_of_not_diabet = 146 score_of_not_diabet = correctly_predicted_for_not_diabet/total_of_not_diabet score_of_not_diabet ###Output _____no_output_____ ###Markdown correctly_predicted_for_not_diabet = 121total_of_not_diabet = 146score_of_not_diabet = correctly_predicted_for_not_diabet/total_of_not_diabetscore_of_not_diabet ###Code #11 diabet_dt_2 = DecisionTreeClassifier(criterion='gini', random_state=1, max_depth=5) diabet_dt_2.fit(X_train, y_train) y_pred_2 = diabet_dt_2.predict(X_test) #12 cm_2 = confusion_matrix(y_test, y_pred_2) cm_2 ###Output _____no_output_____
Modulation_classification.ipynb
###Markdown ###Code %%capture !pip install tensorflow-gpu==2.0.0 import numpy as np import matplotlib.pyplot as plt import pickle import tensorflow as tf tf.__version__ from google.colab import drive drive.mount('/gdrive') !ls '/gdrive/My Drive/Colab Notebooks' with open('/gdrive/My Drive/Colab Notebooks/RML2016.10a_dict.pkl', 'rb') as open_file: u = pickle._Unpickler( open_file ) u.encoding = 'latin1' Xd = u.load() # a dict with 220 keys of the form ('modulation_name', snr_value) where modulation_name is in {'8PSK','AM-DSB', 'AM-SSB', 'BPSK', 'CPFSK', 'GFSK', 'PAM4', 'QAM16', 'QAM64', 'QPSK', 'WBFM'} (11 mod) and snr_value is in {-20, -18, ..., 18} (20 snr_value) snrs,mods = map(lambda j: sorted(list(set(map(lambda x: x[j], Xd.keys())))), [1,0]) X = [] lbl = [] for mod in mods: for snr in snrs: X.append(Xd[(mod,snr)]) for i in range(Xd[(mod,snr)].shape[0]): lbl.append((mod,snr)) # Doubt: Only one modulation and one snr is labelled X = np.vstack(X) mods.index('8PSK') print(X.shape) print(X[0,:].shape) np.random.seed(2016) n_examples = X.shape[0] n_train = int(n_examples * 0.5) train_idx = np.random.choice(range(0,n_examples), size=n_train, replace=False) test_idx = list(set(range(0,n_examples))-set(train_idx)) X_train = X[train_idx] X_test = X[test_idx] print(train_idx) def to_onehot(yin): yy = list(yin) # Doubt: Is list required yy1 = np.zeros([len(list(yy)), max(yy)+1]) yy1[np.arange(len(list(yy))),yy] = 1 return yy1 Y_train = to_onehot(map(lambda x: mods.index(lbl[x][0]), train_idx)) # Doubt Y_test = to_onehot(map(lambda x: mods.index(lbl[x][0]), test_idx)) Y_test[0] X_train.shape[1:] train_SNRs = list(map(lambda x: lbl[x][1], train_idx)) print(train_SNRs) train_Y_0 = Y_train[np.where(np.array(train_SNRs)== 12)] print(train_Y_0) in_shp = list(X_train.shape[1:]) # Doubt: Why list classes = mods print (in_shp) print('X_train.shape:', X_train.shape) print('Y_train.shape:', Y_train.shape) print(X.shape) print(len(lbl)) print('Y_train[4:9]: \n', Y_train[4:9]) print('train_idx.shape:', train_idx.shape) print('train_idx[4:9]:', train_idx[4:9]) print(lbl[train_idx[4]], lbl[train_idx[5]], lbl[train_idx[6]], lbl[train_idx[7]], lbl[train_idx[8]], lbl[train_idx[9]]) print('classes:', classes) # The CNN Network list(X_train.shape[1:]) in_shp = list(X_train.shape[1:]) dr = 0.5 # dropout rate (%) model = tf.keras.models.Sequential() model.add(tf.keras.layers.Reshape([1]+in_shp, input_shape=in_shp)) model.add(tf.keras.layers.ZeroPadding2D((0,2),data_format='channels_first')) model.add(tf.keras.layers.Convolution2D(256,(1,3), padding='valid', activation="relu", name="conv1",data_format='channels_first', kernel_initializer='glorot_uniform')) model.add(tf.keras.layers.Dropout(dr)) model.add(tf.keras.layers.ZeroPadding2D((0,2),data_format='channels_first')) model.add(tf.keras.layers.Convolution2D(80,(2,3), padding="valid", activation="relu", name="conv2", data_format='channels_first', kernel_initializer='glorot_uniform')) model.add(tf.keras.layers.Dropout(dr)) model.add(tf.keras.layers.Flatten()) model.add(tf.keras.layers.Dense(256, activation='relu', kernel_initializer='he_normal', name="dense1")) model.add(tf.keras.layers.Dropout(dr)) model.add(tf.keras.layers.Dense(len(classes), kernel_initializer='he_normal', name="dense2" )) model.add(tf.keras.layers.Activation('softmax')) # Doubt: activation could have been inside in the previous line model.add(tf.keras.layers.Reshape([len(classes)])) # Doubt: Why this reshape model.compile(loss='categorical_crossentropy', optimizer='adam') model.summary() from 
google.colab import drive drive.mount('/content/drive') !ls ./drive in_shp # Set up some params nb_epoch = 100 # number of epochs to train on batch_size = 1024 # training batch size filepath = "./drive/My Drive/new_model.h5" callbacks = [ tf.keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=True, mode='auto'), tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, verbose=0, mode='auto') ] # perform training ... history = model.fit(X_train, Y_train, batch_size=batch_size, epochs=nb_epoch, verbose=2, validation_data=(X_test, Y_test), callbacks = callbacks) #evaluate(x=None, y=None, batch_size=None, verbose=1, sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False) score = model.evaluate(X_test, Y_test, verbose=1, batch_size=batch_size) print(score) # Show loss curves plt.figure() plt.title('Training performance') plt.plot(history.epoch, history.history['loss'], label='train loss+error') plt.plot(history.epoch, history.history['val_loss'], label='val_error') plt.legend() ###Output _____no_output_____
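###Markdown Beyond the overall test loss, a common next step for this dataset is accuracy as a function of SNR, since low-SNR examples are much harder to classify. This is a sketch that only uses variables already defined above (`model`, `X_test`, `Y_test`, `test_idx`, `lbl`, `snrs`):
###Code
import numpy as np

# predicted and true class indices on the test set
test_pred = np.argmax(model.predict(X_test, batch_size=1024), axis=1)
test_true = np.argmax(Y_test, axis=1)

# SNR of every test example, taken from the label list
test_snrs = np.array([lbl[i][1] for i in test_idx])

for snr in snrs:
    mask = test_snrs == snr
    if mask.sum() > 0:
        acc = np.mean(test_pred[mask] == test_true[mask])
        print("SNR %d dB: accuracy %.3f" % (snr, acc))
###Output _____no_output_____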
spark/recommendation_system/basic_recommendation_system.ipynb
###Markdown Environment Preparation ###Code import os import sys # set environment variable os.environ['SPARK_HOME'] = "/home/bin_yin/tmp/spark-2.0.2-bin-hadoop2.7/" os.environ['PYSPARK_SUBMIT_ARGS'] = "--master local[2] pyspark-shell" # we can check by os.environ.get("SPARK_HOME") # Init sc import findspark findspark.init() import pyspark sc = pyspark.SparkContext(appName="myAppName") ###Output _____no_output_____ ###Markdown 1. Data Description **u.user**u'1|24|M|technician|85711'* [0]: users id* [1]: age* [2]: gender* [3]: occupation* [4]: zipcode**u.item**u'1|Toy Story (1995)|01-Jan-1995||http://us.imdb.com/M/title-exact?Toy%20Story%20(1995)|0|0|0|1|1|1|0|0|0|0|0|0|0|0|0|0|0|0|0'* [0]: movie id* [1]: movie title* [2]: release date* [3]: video release date* [4]: IMDb URL* unknown | Action | Adventure | Animation | Children's | Comedy | Crime | Documentary | Drama | Fantasy | Film-Noir | Horror | Musical | Mystery | Romance | Sci-Fi | Thriller | War | Western**u.data**u'196\t242\t3\t881250949'* [0]: user id* [1]: item id* [2]: rating* [3]: timestamp 2. Train & Predict ###Code # get data raw_data = sc.textFile('ml-100k/u.data') raw_ratings = raw_data.map(lambda line:line.split('\t')) ratings = raw_ratings.map( lambda f:( int(f[0]), int(f[1]), float(f[2])) ) ratings.first() # 用户ID,电影ID,得分 # check ratings data import numpy as np np.array(ratings.collect()).shape# 矩阵维度:(100000, 3) %%time # train model from pyspark.mllib.recommendation import ALS model = ALS.train(ratings, rank=50, iterations=10, lambda_=0.01) # 核心原理就是把ratings矩阵分解(协同过滤),类似为每个用户训练一个线性回归模型 # rank:线性模型维度 # predict model.predict(789, 123) # 用户789对电影123的打分是3.107 ###Output _____no_output_____ ###Markdown 3. TOP-K ###Code # TOP-10 model.recommendProducts(789, 10) # get top-10 movide id for user 789 rts = model.recommendProducts(789, 10) # get top-10 movide id for user 789 movie_ids = [r[1] for r in rts] movie_ids # what is Top-10 movie name? movie_raw_data = sc.textFile('ml-100k/u.item') movies_by_id = movie_raw_data.map(lambda line: line.split('|')).map(lambda x:(x[0],x[1])).collect() for m in movies_by_id: k = m[0] v = m[1] if(int(k) in movie_ids): print(v) ###Output Taxi Driver (1976) Ed Wood (1994) Pulp Fiction (1994) Die Hard (1988) GoodFellas (1990) Godfather: Part II, The (1974) Raging Bull (1980) Breakdown (1997) Some Like It Hot (1959) Annie Hall (1977) ###Markdown 4. Details ###Code moviesForUser = ratings.keyBy(lambda f:f[0]).lookup(789) # UserID=789 的用户评价过的电影 moviesForUser # UserID, ProductID, Rating len(moviesForUser)# 用户评价过33部电影 # what is movies name? movies_ids_for_user = [x[1] for x in moviesForUser] movie_raw_data = sc.textFile('ml-100k/u.item') movies_by_id = movie_raw_data.map(lambda line: line.split('|')).map(lambda x:(x[0],x[1])).collect() for m in movies_by_id: k = m[0] v = m[1] if(int(k) in movies_ids_for_user): print('{0}: {1}'.format(k,v)) # 他对电影Dead Man Walking (1995)(MovieID=9)的评分是5.0 for x in moviesForUser: if(x[1]==9): print(x[2]) ###Output 5.0 ###Markdown **Analysis**: 这里的电影就是这个用户评价过的电影。这个用户对Dead Man Walking (1995)(MovieID=9)的评分是5.0,说明他很喜欢这部电影。而我们给他推荐的TOP-10中,也有一部电影跟这个很接近:Die Hard (1988)。说明推荐还是有效果的 ###Code sc.stop()# Stop Spark ###Output _____no_output_____
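###Markdown A natural follow-up to the qualitative check above is a quantitative one: the mean squared error of the ALS model on the ratings it was trained on. This is a sketch only; it has to be executed before `sc.stop()` is called, and it uses `predictAll` from MLlib's matrix-factorisation model:
###Code
# Note: run this before sc.stop(); it is placed here only for reference.
user_products = ratings.map(lambda r: (r[0], r[1]))
predictions = model.predictAll(user_products).map(lambda r: ((r.user, r.product), r.rating))
rates_and_preds = ratings.map(lambda r: ((r[0], r[1]), r[2])).join(predictions)
mse = rates_and_preds.map(lambda x: (x[1][0] - x[1][1]) ** 2).mean()
print("Training-set MSE:", mse)
###Output _____no_output_____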
pandas basics/Pandas Demo 1.ipynb
###Markdown To read a csv file from storage ###Code df = pd.read_csv("data/survey_results_public.csv") ###Output _____no_output_____ ###Markdown Printing the data frame ###Code df ###Output _____no_output_____ ###Markdown Printing the head section (first rows) of the data frame (default n=5) ###Code df.head() ###Output _____no_output_____ ###Markdown Printing the head section (first rows) of the data frame with a variable number of rows (n) ###Code df.head(10) ###Output _____no_output_____ ###Markdown Printing the tail section (last rows) of the data frame (default n=5) ###Code df.tail() ###Output _____no_output_____ ###Markdown Printing the tail section (last rows) of the data frame with a variable number of rows (n) ###Code df.tail(10) df.shape ###Output _____no_output_____ ###Markdown Display column names and their respective data types ###Code df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 88883 entries, 0 to 88882 Data columns (total 85 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Respondent 88883 non-null int64 1 MainBranch 88331 non-null object 2 Hobbyist 88883 non-null object 3 OpenSourcer 88883 non-null object 4 OpenSource 86842 non-null object 5 Employment 87181 non-null object 6 Country 88751 non-null object 7 Student 87014 non-null object 8 EdLevel 86390 non-null object 9 UndergradMajor 75614 non-null object 10 EduOther 84260 non-null object 11 OrgSize 71791 non-null object 12 DevType 81335 non-null object 13 YearsCode 87938 non-null object 14 Age1stCode 87634 non-null object 15 YearsCodePro 74331 non-null object 16 CareerSat 72847 non-null object 17 JobSat 70988 non-null object 18 MgrIdiot 61159 non-null object 19 MgrMoney 61157 non-null object 20 MgrWant 61232 non-null object 21 JobSeek 80555 non-null object 22 LastHireDate 79854 non-null object 23 LastInt 67155 non-null object 24 FizzBuzz 71344 non-null object 25 JobFactors 79371 non-null object 26 ResumeUpdate 77877 non-null object 27 CurrencySymbol 71392 non-null object 28 CurrencyDesc 71392 non-null object 29 CompTotal 55945 non-null float64 30 CompFreq 63268 non-null object 31 ConvertedComp 55823 non-null float64 32 WorkWeekHrs 64503 non-null float64 33 WorkPlan 68914 non-null object 34 WorkChallenge 68141 non-null object 35 WorkRemote 70284 non-null object 36 WorkLoc 70055 non-null object 37 ImpSyn 71779 non-null object 38 CodeRev 70390 non-null object 39 CodeRevHrs 49790 non-null float64 40 UnitTests 62668 non-null object 41 PurchaseHow 61108 non-null object 42 PurchaseWhat 62029 non-null object 43 LanguageWorkedWith 87569 non-null object 44 LanguageDesireNextYear 84088 non-null object 45 DatabaseWorkedWith 76026 non-null object 46 DatabaseDesireNextYear 69147 non-null object 47 PlatformWorkedWith 80714 non-null object 48 PlatformDesireNextYear 77443 non-null object 49 WebFrameWorkedWith 65022 non-null object 50 WebFrameDesireNextYear 62944 non-null object 51 MiscTechWorkedWith 59586 non-null object 52 MiscTechDesireNextYear 64511 non-null object 53 DevEnviron 87317 non-null object 54 OpSys 87851 non-null object 55 Containers 85366 non-null object 56 BlockchainOrg 48175 non-null object 57 BlockchainIs 60165 non-null object 58 BetterLife 86269 non-null object 59 ITperson 87141 non-null object 60 OffOn 86663 non-null object 61 SocialMedia 84437 non-null object 62 Extraversion 87305 non-null object 63 ScreenName 80486 non-null object 64 SOVisit1st 83877 non-null object 65 SOVisitFreq 88263 non-null object 66 SOVisitTo 88086 non-null object 67 SOFindAnswer 87816 non-null object 68 SOTimeSaved 86344 non-null object 69 
SOHowMuchTime 68378 non-null object 70 SOAccount 87828 non-null object 71 SOPartFreq 74692 non-null object 72 SOJobs 88066 non-null object 73 EntTeams 87841 non-null object 74 SOComm 88131 non-null object 75 WelcomeChange 85855 non-null object 76 SONewContent 69560 non-null object 77 Age 79210 non-null float64 78 Gender 85406 non-null object 79 Trans 83607 non-null object 80 Sexuality 76147 non-null object 81 Ethnicity 76668 non-null object 82 Dependents 83059 non-null object 83 SurveyLength 86984 non-null object 84 SurveyEase 87081 non-null object dtypes: float64(5), int64(1), object(79) memory usage: 57.6+ MB ###Markdown Set the maximum number of columns to show when displaying the dataframe ###Code pd.set_option('display.max_columns', 85) ###Output _____no_output_____ ###Markdown Set the maximum number of rows to show when displaying the dataframe ###Code pd.set_option('display.max_rows', 100) schema_df = pd.read_csv("data/survey_results_schema.csv") schema_df ###Output _____no_output_____
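###Markdown A hedged sketch of a few follow-up inspections that often come next in a demo like this — column selection, value counts, and a missing-value summary. The column names used (`Country`, `EdLevel`, `Hobbyist`) are taken from the `df.info()` listing above. ###Code
# Select a small subset of columns for a quick look
df[['Country', 'EdLevel', 'Hobbyist']].head()

# Count how many respondents answered Yes/No to the Hobbyist question
df['Hobbyist'].value_counts()

# Percentage of missing values per column, largest first
(df.isnull().sum() / len(df) * 100).sort_values(ascending=False).head(10)
###Output _____no_output_____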
UDEMY_Datavis_Python/08 - jupyter widgets/Erste Schritte.ipynb
###Markdown Jupyter Widgets Jupyter Widgets let you create interactive notebooks. Really cool, because you can then simply play around with parameters :) Btw, you can find the documentation here: http://ipywidgets.readthedocs.io/en/latest/examples/Widget%20List.html ###Code import ipywidgets as widgets ###Output _____no_output_____ ###Markdown Example: IntSlider ###Code widgets.IntSlider( value=7, min=0, max=100, # step=1, description='X-Wert:', orientation='horizontal', slider_color='white' ) widgets.IntSlider( value=7, min=0, max=100, # step=1, description='X-Wert:', orientation='vertical', slider_color='white' ) ###Output _____no_output_____ ###Markdown Example: FloatSlider ###Code widgets.FloatSlider( value=7, min=0, max=100, step=0.1, description='X-Wert:', orientation='horizontal', slider_color='white' ) ###Output _____no_output_____ ###Markdown Example: IntRangeSlider ###Code widgets.IntRangeSlider( value=[3, 10], min=0, max=100, description='X-Wert:', orientation='horizontal', slider_color='white' ) ###Output _____no_output_____ ###Markdown Example: Checkbox ###Code widgets.Checkbox( value=True, description="Sicher?" ) ###Output _____no_output_____ ###Markdown Example: Dropdown ###Code widgets.Dropdown( options=["MUC", "CGN", "HEL", "BUD", "SGX"], value="MUC", description="Flughafen" ) ###Output _____no_output_____ ###Markdown Example: RadioButtons ###Code widgets.RadioButtons( options=["MUC", "CGN", "HEL", "BUD", "SGX"], value="MUC", description="Flughafen" ) ###Output _____no_output_____ ###Markdown Example: Text / Textarea ###Code widgets.Text( description="Name:", value="Hallo Welt" ) widgets.Textarea( description="Name:", value="Hallo Welt" ) ###Output _____no_output_____
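###Markdown A hedged extra example (not in the original notebook): the cells above only render widgets; `interact` is the usual way to wire a widget to a function so the output updates whenever the slider moves. The function `f` below is just an illustrative placeholder. ###Code
from ipywidgets import interact

# interact re-runs f with the slider's current value on every change
def f(x):
    return x ** 2

interact(f, x=widgets.IntSlider(value=7, min=0, max=100, description='X-Wert:'))
###Output _____no_output_____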
notebooks/Homework 4.ipynb
###Markdown Homework 4: Believer-Skeptic Model ###Code from __future__ import division import ADMCode from ADMCode import visualize as vis from ADMCode import believer_skeptic import numpy as np from numpy.random import sample as rs import pandas as pd import sys import os # from ipywidgets import interactive import matplotlib.pyplot as plt import seaborn as sns import warnings # Temporary for now until push changes to PIP #sys.path.insert(0,'../ADMCode') #import believer_skeptic warnings.simplefilter('ignore', np.RankWarning) warnings.filterwarnings("ignore", module="matplotlib") warnings.filterwarnings("ignore") sns.set(style='white', font_scale=1.3) %matplotlib inline ###Output _____no_output_____ ###Markdown **Question 1:** **Answer the following questions about the relationship between the system of equations below.** See the Lab 4 notebook for definition of terms. * **Eq. 1**: Go process. $$G_{j,t}(\tau) = G_{j,t}(\tau - \Delta \tau) + \upsilon ^G _{j,t} \Delta \tau + \epsilon^G_j (\tau)$$* **Eq. 2**: No go process. $$N_{j,t}(\tau) = N_{j,t}(\tau - \Delta \tau) + \upsilon ^N _{j,t} \Delta \tau + \epsilon^N_j (\tau)$$* **Eq. 3**: Execution process. $$\theta_{j,t}(\tau) = [G_{j,t}(\tau) - N_{j,t}(\tau)] \cdot \cosh(\gamma \cdot \tau)$$ **1a:** Describe the three components of Eqs. 1 & 2 in laymen's terms.* **Answer 1a:** **1b:** As time ($\tau$) progresses, how does the exponential term in Eq. 3 ($\cosh (\gamma \cdot \tau)$) influence the nature of the competition between channels?* **Answer 1b:** **Question 2:** **Answer the following questions about the relationship between the system of equations below.** * **Eq. 4**: Action value. $$q_j(t+1) = q_j(t) + \alpha \cdot [r(t) - q_j(t)]$$* **Eq. 5**: Greediness. $$p_j(t) = \frac{\exp{\beta \cdot q_j(t)}}{\Sigma^n_i \exp{\beta \cdot q_i(t)}}$$* **Eq. 6**: (Reward) prediction error. $$\delta_j(t) = p_j(t) - p_j(t-1)$$* **Eq. 7**: Update rule: $$\upsilon^{G/N}_{j,t+1} = \upsilon^{G/N}_{j,t} + \alpha^{G/N} \cdot \delta_j(t)$$ **2a:** How is the estimation of the prediction error (Eq. 6) different from the normative form of the update rule in q-learning?* **Answer 2a:** **2b:** In the Believer-Skeptic model, the Go & NoGo processes have different learning rates (i.e., $\alpha^G$ & $\alpha^N$). What biological justification is there for these two pathways having different forms of learning?* **Answer 2b:** ** Question 3: ** ###Code # Define the DDM parameters as an object to pass p={'vd':np.asarray([.7]*4), 'vi':np.asarray([.25]*4), 'a':.25, 'tr':.3, 'xb':.00005} # Learning rates on the Go (direct) and NoGo (indirect) pathways aGo=.1 aNo=.1 # Run one simulation igtData = pd.read_csv("https://github.com/CoAxLab/AdaptiveDecisionMaking_2018/blob/master/data/IGTCards.csv?raw=true") outdf, agentdf = believer_skeptic.play_IGT(p, feedback=igtData, beta=.09, nblocks=2, alphaGo=aGo, alphaNo=aNo, singleProcess=0) print(agentdf.rt.mean()) agentdf.iloc[:, :].choice.value_counts().sort_index() ###Output 0.62884 ###Markdown The Iowa Gambling task has two general metrics for estimating performance of the agent.**Payoff (P)** is the degree to which the agent chooses the High Value decks over the Low Value decks. This is a measure of efficient value-based decision-making.P = $\Sigma (C + D) - \Sigma (A + B)$**Sensitivity (Q)** is the sensitivity of the agent to High Frequency rewards over Low Frequency rewards.Q = $\Sigma (B + D) - \Sigma (A + C)$(In the simulations above Deck A is choice 0, Deck B is choice 1, Deck C is choice 2, and Deck D is choice 3). 
**Q3:** From the agent dataframe (agentdf) run in the code cell above, calculate P & Q. ###Code # CODE FOR ANSWERING Q3 ###Output _____no_output_____ ###Markdown ** Question 4: ** ###Code # Learning rates on the Go (direct) and NoGo (indirect) pathways aGo=.1 aNo=.1 outdf, agentdf = believer_skeptic.play_IGT(p, feedback=igtData, beta=.09, nblocks=2, alphaGo=aGo, alphaNo=aNo, singleProcess=0) ## INSERT CALCULATION CODE FOR PAYOFF & SENSITIVITY FROM QUESTION 3 HERE ## TO ANSWER THE QUESTIONS BELOW ###Output _____no_output_____ ###Markdown (To answer the questions below, you may need to repeat several runs of the code above in order to see stability in Payoff & Sensitivity scores). **4a:** Change $\alpha^N$ (i.e., aNo) above to 0.025, while keeping $\alpha^G$ (i.e., aGo) at 0.1. How does this impact the Payoff and Sensitivity scores?* **Answer 4a:** **4b:** Put $\alpha^N$ (i.e., aNo) back to 0.1, while reducing $\alpha^G$ (i.e., aGo) to 0.05. How does this impact the Payoff and Sensitivity scores?* **Answer 4b:** **Bonus Problems** Full credit is only given if the instructor can run your modified code below.**BP a:** Use the process simulation code below to visualize how varying the drift rate of the Go ($v_d$) and NoGo ($v_i$) processes impacts the dynamics of the four choices. * **Bonus Answer a:** *copy/paste your modified code into a code cell below* **BP b:** Write a set of nested for-loops to simulate a set of agent runs with $\alpha^N$ values ranging from 0.025 to 0.15 (in increments of 0.005), keeping $\alpha^G$ fixed. Simulate 100 runs per value of $\alpha^N$ and report (or visualize) the average Payoff & Sensitivity score. Report how these values are impacted by different levels of $\alpha^N$. * **Bonus Answer b:** *copy/paste your modified code into a code cell below* **BP c:** Repeat the simulations from Bonus Problem b above but now increase $v_i$ to 0.5. How does this change the results?* **Bonus Answer c:** *copy/paste your modified code into a code cell below* ** Process Code ** ###Code # This is a snippet from believer_skeptic.py, specifically the # simulate_multirace function. single_process=0 si=.1 tb=1.0 dt=.001 nresp = p['vd'].size dx = si * np.sqrt(dt) nTime = np.ceil((tb-p['tr'])/dt).astype(int) xtb = believer_skeptic.temporal_dynamics(p, np.cumsum([dt]*nTime)) # Run the process model Pd = .5 * (1 + (p['vd'] * np.sqrt(dt))/si) Pi = .5 * (1 + (p['vi'] * np.sqrt(dt))/si) direct = xtb * np.where((rs((nresp, nTime)).T < Pd),dx,-dx).T indirect = np.where((rs((nresp, nTime)).T < Pi),dx,-dx).T execution = np.cumsum(direct-indirect, axis=1) act_ix, rt, rt_ix = believer_skeptic.analyze_multiresponse(execution, p) nsteps_to_rt = np.argmax((execution.T>=p['a']).T, axis=1) rts = p['tr'] + nsteps_to_rt*dt # set non responses to 999 rts[rts==p['tr']]=999 # get accumulator with fastest RT (winner) in each cond act_ix = np.argmin(rts) winner, rt=act_ix, rts[act_ix] rt_ix = np.ceil((rt-p['tr'])/dt).astype(int) actions = np.arange(nresp) losers = actions[actions!=act_ix] print(act_ix) plt.plot(execution[act_ix][:rt_ix], color='b') for l in losers: plt.plot(execution[l][:rt_ix], color='r', alpha=.3) sns.despine() ###Output 2
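###Markdown A hedged illustration (not the official solution) of one way the Payoff (P) and Sensitivity (Q) scores defined above might be computed, assuming `agentdf.choice` holds the chosen deck on each trial with decks A–D coded as 0–3. ###Code
# Count how often each deck (0=A, 1=B, 2=C, 3=D) was chosen
counts = agentdf.choice.value_counts().reindex([0, 1, 2, 3], fill_value=0)

# Payoff: high-value decks (C, D) minus low-value decks (A, B)
P = (counts[2] + counts[3]) - (counts[0] + counts[1])

# Sensitivity: high-frequency-reward decks (B, D) minus low-frequency decks (A, C)
Q = (counts[1] + counts[3]) - (counts[0] + counts[2])

print('Payoff P = {0}, Sensitivity Q = {1}'.format(P, Q))
###Output _____no_output_____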
examples/notebooks/notebook_template.ipynb
###Markdown Uncomment the following line to install [geemap](https://geemap.org) if needed. ###Code # !pip install geemap ###Output _____no_output_____ ###Markdown Google Earth Engine Python Tutorials* GitHub: https://github.com/giswqs/geemap* Notebook examples: https://github.com/giswqs/geemap/blob/master/examples/README.mdtutorials* Video tutorials: https://www.youtube.com/playlist?list=PLAxJ4-o7ZoPccOFv1dCwvGI6TYnirRTg3**Tutorial 21 - How to export Earth Engine maps as HTML and images** Import libraries ###Code import ee import geemap ###Output _____no_output_____ ###Markdown Video tutorial on YouTube ###Code geemap.show_youtube('h0pz3S6Tvx0') ###Output _____no_output_____ ###Markdown Update the geemap packageIf you run into errors with this notebook, please uncomment the line below to update the [geemap](https://github.com/giswqs/geemapinstallation) package to the latest version from GitHub. Restart the Kernel (Menu -> Kernel -> Restart) to take effect. ###Code # geemap.update_package() ###Output _____no_output_____ ###Markdown Create an interactive map ###Code Map = geemap.Map() Map ###Output _____no_output_____
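###Markdown A hedged sketch of the export step that the tutorial title refers to. The `to_html`/`to_image` helpers are the ones demonstrated in the linked video, but their exact signatures vary between geemap versions, and the output file names below are only placeholders. ###Code
# Save the current interactive map as a standalone HTML page
# (positional argument used because the parameter name differs across geemap versions)
Map.to_html('my_map.html')

# Save a static screenshot of the map; the map widget must be displayed first
Map.to_image('my_map.png')
###Output _____no_output_____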
EDA/2018-04-01-minfile-analysis-datestamps.ipynb
###Markdown 2018-04-01-minfile-analysis-datestamps April 1, 2018 ###Code import numpy as np import pandas as pd pd.options.display.max_columns = 1000 pd.options.display.max_rows = 1000 pd.options.display.max_colwidth = 1000 %matplotlib inline import mpl_toolkits.basemap from mpl_toolkits.basemap import Basemap import matplotlib.pyplot as plt import warnings warnings.filterwarnings("ignore") ###Output _____no_output_____ ###Markdown 1 BC Minfile Analysis 1.0.1 Load and clean the data ###Code df = pd.read_csv('../data/ariasdata.csv', encoding='latin') pd.read_csv('../data/arismetadata.csv')[['Name','Description']] df[~df.com_nms.isnull()].wrk_yr.plot.hist() df[df.com_nms.isnull()].wrk_yr.plot.hist() df.head(1).T df.isnull().sum() / len(df) * 100 col_subset = ['rep_no', 'rep_yr', 'lat83', 'long83'] col_com = 'com_nms' mask = (~df[col_com].isnull()) df_min = pd.DataFrame( [cols+[comm] for cols, comms in zip(df[mask][col_subset].values.tolist(), df[mask][col_com].str.split(', ').values.tolist()) for comm in comms], columns=col_subset+[col_com]) df_min = df_min.rename(columns={'com_nms': 'commodity'}) df_min.rep_yr = df_min.rep_yr.astype(int) (df_min.rep_yr < 1900).sum() df_min = df_min[df_min.rep_yr > 1900] df_min.head() # rep no is a unique ID len(df) - df.com_nms.isnull().sum(), len(df_min.rep_no.unique()) ###Output _____no_output_____ ###Markdown 1.0.2 Analysis Minerals in the dataset ###Code df_min.commodity.value_counts(ascending=False).head(50) df_min.commodity.value_counts(ascending=False).head(20)[::-1].plot.barh() ###Output _____no_output_____ ###Markdown **Average number of minerals per site** ###Code df_min.columns df_min.groupby('rep_no').commodity.size().mean() ###Output _____no_output_____ ###Markdown **Depth distribution** ###Code df_min.elevation.hist(bins=50, label='All minerals') plt.legend() df_min[df_min.commodity=='Gold'].elevation.hist(bins=50, label='Gold') plt.legend() ###Output _____no_output_____ ###Markdown 1.0.3 Map plots ###Code def plot_map(df_min, commodity='', quality='l'): if commodity: longs = list(df_min[df_min.commodity == commodity].long83) latts = list(df_min[df_min.commodity == commodity].lat83) else: longs = list(df_min.long83) latts = list(df_min.lat83) # plot the blank world map my_map = Basemap(projection='merc', lat_0=50, lon_0=-100, resolution = quality, area_thresh = 5000.0, llcrnrlon=min(longs), llcrnrlat=min(latts), urcrnrlon=max(longs), urcrnrlat=max(latts)) # set resolution='h' for high quality # draw elements onto the world map my_map.drawcountries() #my_map.drawstates() my_map.drawcoastlines(antialiased=False, linewidth=0.005) # add coordinates as red dots x, y = my_map(longs, latts) my_map.plot(x, y, 'ro', markersize=0.5, alpha=0.5) plt.show() plot_map(df_min.groupby('rep_no').head(1)) plot_map(df_min, 'Gold') plot_map(df_min, 'Copper') plot_map(df_min, 'Cobalt') ###Output _____no_output_____ ###Markdown **1.0.4 Depoits by discovery date** ###Code df_min.groupby('rep_no').head(1).rep_yr.plot.hist(50) df_min[df_min.commodity == 'Gold'].rep_yr.plot.hist(50) gold = df_min[df_min.commodity == 'Gold'] len(gold) fig = plt.figure(figsize=(9, 8)) gold.groupby('rep_yr').size().plot.barh() gold = df_min[df_min.commodity == 'Cobalt'] len(gold) ###Output _____no_output_____ ###Markdown 1.0.3 Load and clean MINFile Minerals ###Code mf_minerals = pd.read_csv('../data/MINFILE_Minerals.csv', encoding='latin') mf_minerals.head() # Transform the data set so that commodity codes and names are now in rows instead of columns mf_minerals = pd.melt(mf_minerals, 
id_vars=[x for x in mf_minerals.columns if 'COMMODITY_CODE' not in x], value_vars=[x for x in mf_minerals.columns if 'COMMODITY_CODE' in x], var_name="commodity_code_ph", value_name="commodity_code") mf_minerals = pd.melt(mf_minerals, id_vars=[x for x in mf_minerals.columns if 'COMMODITY_DESCRIPTION' not in x], value_vars=[x for x in mf_minerals.columns if 'COMMODITY_DESCRIPTION' in x], var_name="commodity_ph", value_name="commodity") mf_minerals.head() mf_minerals = mf_minerals.rename(index=str, columns={ "DECIMAL_LONGITUDE": "long83", "DECIMAL_LATITUDE": "lat83" }) mf_minerals.commodity = mf_minerals.commodity.str.strip() plot_map(mf_minerals, "Gold") plot_map(mf_minerals, 'Copper') plot_map(mf_minerals, 'Cobalt') ###Output _____no_output_____ ###Markdown 2018-04-01-minfile-analysis-datestamps April 1, 2018 ###Code import numpy as np import pandas as pd pd.options.display.max_columns = 1000 pd.options.display.max_rows = 1000 pd.options.display.max_colwidth = 1000 %matplotlib inline import mpl_toolkits.basemap from mpl_toolkits.basemap import Basemap import matplotlib.pyplot as plt import warnings warnings.filterwarnings("ignore") ###Output _____no_output_____ ###Markdown 1 BC Minfile Analysis 1.0.1 Load and clean the data ###Code df = pd.read_csv('../data/canada/british-columbia/ariasdata.csv', encoding='latin') pd.read_csv('../data/canada/british-columbia/arismetadata.csv')[['Name','Description']] df[~df.com_nms.isnull()].wrk_yr.plot.hist() df[df.com_nms.isnull()].wrk_yr.plot.hist() df.head(1).T df.isnull().sum() / len(df) * 100 col_subset = ['rep_no', 'rep_yr', 'lat83', 'long83'] col_com = 'com_nms' mask = (~df[col_com].isnull()) df_min = pd.DataFrame( [cols+[comm] for cols, comms in zip(df[mask][col_subset].values.tolist(), df[mask][col_com].str.split(', ').values.tolist()) for comm in comms], columns=col_subset+[col_com]) df_min = df_min.rename(columns={'com_nms': 'commodity'}) df_min.rep_yr = df_min.rep_yr.astype(int) (df_min.rep_yr < 1900).sum() df_min = df_min[df_min.rep_yr > 1900] df_min.head() # rep no is a unique ID len(df) - df.com_nms.isnull().sum(), len(df_min.rep_no.unique()) ###Output _____no_output_____ ###Markdown 1.0.2 Analysis Minerals in the dataset ###Code df_min.commodity.value_counts(ascending=False).head(50) df_min.commodity.value_counts(ascending=False).head(20)[::-1].plot.barh() ###Output _____no_output_____ ###Markdown **Average number of minerals per site** ###Code df_min.columns df_min.groupby('rep_no').commodity.size().mean() ###Output _____no_output_____ ###Markdown **Depth distribution** ###Code df_min.elevation.hist(bins=50, label='All minerals') plt.legend() df_min[df_min.commodity=='Gold'].elevation.hist(bins=50, label='Gold') plt.legend() ###Output _____no_output_____ ###Markdown 1.0.3 Map plots ###Code def plot_map(df_min, commodity='', quality='l'): if commodity: longs = list(df_min[df_min.commodity == commodity].long83) latts = list(df_min[df_min.commodity == commodity].lat83) else: longs = list(df_min.long83) latts = list(df_min.lat83) # plot the blank world map my_map = Basemap(projection='merc', lat_0=50, lon_0=-100, resolution = quality, area_thresh = 5000.0, llcrnrlon=min(longs), llcrnrlat=min(latts), urcrnrlon=max(longs), urcrnrlat=max(latts)) # set resolution='h' for high quality # draw elements onto the world map my_map.drawcountries() #my_map.drawstates() my_map.drawcoastlines(antialiased=False, linewidth=0.005) # add coordinates as red dots x, y = my_map(longs, latts) my_map.plot(x, y, 'ro', markersize=0.5, alpha=0.5) plt.show() 
plot_map(df_min.groupby('rep_no').head(1)) plot_map(df_min, 'Gold') plot_map(df_min, 'Copper') plot_map(df_min, 'Cobalt') ###Output _____no_output_____ ###Markdown **1.0.4 Deposits by discovery date** ###Code df_min.groupby('rep_no').head(1).rep_yr.plot.hist(50) df_min[df_min.commodity == 'Gold'].rep_yr.plot.hist(50) gold = df_min[df_min.commodity == 'Gold'] len(gold) fig = plt.figure(figsize=(9, 8)) gold.groupby('rep_yr').size().plot.barh() cobalt = df_min[df_min.commodity == 'Cobalt'] len(cobalt) ###Output _____no_output_____ ###Markdown 1.0.3 Load and clean MINFile Minerals ###Code mf_minerals = pd.read_csv('../data/canada/british-columbia/MINFILE.csv', encoding='latin') mf_minerals.head() # Transform the data set so that commodity codes and names are now in rows instead of columns mf_minerals = pd.melt(mf_minerals, id_vars=[x for x in mf_minerals.columns if 'COMMODITY_CODE' not in x], value_vars=[x for x in mf_minerals.columns if 'COMMODITY_CODE' in x], var_name="commodity_code_ph", value_name="commodity_code") mf_minerals = pd.melt(mf_minerals, id_vars=[x for x in mf_minerals.columns if 'COMMODITY_DESCRIPTION' not in x], value_vars=[x for x in mf_minerals.columns if 'COMMODITY_DESCRIPTION' in x], var_name="commodity_ph", value_name="commodity") mf_minerals.head() mf_minerals = mf_minerals.rename(index=str, columns={ "DECIMAL_LONGITUDE": "long83", "DECIMAL_LATITUDE": "lat83" }) mf_minerals.commodity = mf_minerals.commodity.str.strip() plot_map(mf_minerals, "Gold") plot_map(mf_minerals, 'Copper') plot_map(mf_minerals, 'Cobalt') ###Output _____no_output_____
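###Markdown A hedged sketch of one possible follow-up (not part of the original notebook): tabulating ARIS reports per decade for a few commodities, using only the `rep_yr` and `commodity` columns built above. The commodity names are assumed to appear in the data; any that do not will simply be filtered out. ###Code
# Reports per decade for a handful of commodities
subset = df_min[df_min.commodity.isin(['Gold', 'Copper', 'Silver', 'Cobalt'])].copy()
subset['decade'] = (subset.rep_yr // 10) * 10

# Rows are decades, columns are commodities, values are report counts
pd.crosstab(subset.decade, subset.commodity)
###Output _____no_output_____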
content/other/dev_setup/.ipynb_checkpoints/virtual_environments-checkpoint.ipynb
###Markdown ---title: "Virtual Environments with venv"subtitle: "A cheatsheet for using venv to create, activate and use virtual environments."author: "Cam"date: 2021-01-01description: "A cheatsheet for using venv to create, activate and use virtual environments."summary: "A cheatsheet for using venv to create, activate and use virtual environments."type: notecategories: ["Python", "Cheat Sheets"]tags: ["best practices", "virtual environments", "venv"]draft: false---{{}} What is a Virtual EnvironmentA virtual environment is an isolated environment where the Python interpreter, libraries and scripts installed are kept completely separate from other environments, including the system environment installed on your operating system. Creating a Virtual Environment with venvSince Python 3.3, Python has included venv in its standard library. To create a virtual environment with the name name_of_venv type `python -m venv name_of_venv` Activating a Virtual EnvironmentTo activate the virtual environment (on Windows) type `name_of_venv\Scripts\activate`The command line interface will now start with (name_of_venv) to show you it has been activated.You can now install packages with pip: `pip install jupyter` Deactivating a Virtual EnvironmentTo deactivate the virtual environment type `deactivate` Add the Virtual Environment as a Python Kernel in JupyterJupyter uses the default system IPython kernel but you have to manually add a kernel for the virtual environment you just created.To add the virtual environment to Jupyter, first you need to:1. activate the environment2. install jupyter (which includes the library ipykernel)3. type `python -m ipykernel install --user --name=name_of_venv` List Kernels added in Jupytertype `jupyter kernelspec list` Remove a Kerneltype `jupyter kernelspec remove name_of_venv` References- [venv Docs](https://docs.python.org/3/library/venv.html)- [IPython Docs](https://ipython.readthedocs.io/en/stable/install/kernel_install.html) ###Code #Versions used in this notebook import sys print("OS:", sys.platform) !python --version from importlib.metadata import version for library in ["jupyterlab"]: print(library, version(library)) ###Output OS: win32 Python 3.9.1 jupyterlab 2.2.0
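###Markdown A hedged, programmatic sketch of the same workflow using the standard-library `venv` module instead of the shell, in case you want to script it — the environment name is just a placeholder. ###Code
import subprocess
import sys
import venv
from pathlib import Path

# Create the environment with pip available (equivalent to `python -m venv name_of_venv`)
venv.create('name_of_venv', with_pip=True)

# The environment's interpreter lives under Scripts\ on Windows and bin/ elsewhere
exe = Path('name_of_venv') / ('Scripts' if sys.platform == 'win32' else 'bin') / 'python'

# Install a package into the environment without activating it in the current shell
subprocess.run([str(exe), '-m', 'pip', 'install', 'jupyter'], check=True)
###Output _____no_output_____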
HW_Image.ipynb
###Markdown Image ###Code import numpy as np import matplotlib.pyplot as plt import astropy.io.fits as fits from astropy.wcs import WCS ###Output _____no_output_____
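###Markdown A hedged sketch of how these imports are typically used together — opening a FITS image and plotting it on its WCS projection. The file name below is only a placeholder; no data file accompanies this notebook. ###Code
# Open a FITS file (placeholder name) and take the primary HDU
hdulist = fits.open('example_image.fits')
hdu = hdulist[0]

# Build a WCS object from the header so pixel axes map to sky coordinates
wcs = WCS(hdu.header)

# Plot the image in its world coordinate system (astropy registers the projection)
plt.subplot(projection=wcs)
plt.imshow(hdu.data, origin='lower', cmap='gray')
plt.xlabel('RA')
plt.ylabel('Dec')
plt.show()

hdulist.close()
###Output _____no_output_____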