Columns: markdown, code, output, license, path, repo_name
Videos: Rendering the environment at each step helps visualize the agent's performance. Before doing that, we first create a function to embed videos in this Colab.
import base64
import IPython

def embed_mp4(filename):
    """Embeds an mp4 file in the notebook."""
    video = open(filename, 'rb').read()
    b64 = base64.b64encode(video)
    tag = '''
    <video width="640" height="480" controls>
      <source src="data:video/mp4;base64,{0}" type="video/mp4">
      Your browser does not support the video tag.
    </video>'''.format(b64.decode())
    return IPython.display.HTML(tag)
_____no_output_____
Apache-2.0
site/zh-cn/agents/tutorials/6_reinforce_tutorial.ipynb
RedContritio/docs-l10n
The following code visualizes the agent's policy for a few episodes:
num_episodes = 3
video_filename = 'imageio.mp4'
with imageio.get_writer(video_filename, fps=60) as video:
    for _ in range(num_episodes):
        time_step = eval_env.reset()
        video.append_data(eval_py_env.render())
        while not time_step.is_last():
            action_step = tf_agent.policy.action(time_step)
            time_step = eval_env.step(action_step.action)
            video.append_data(eval_py_env.render())

embed_mp4(video_filename)
_____no_output_____
Apache-2.0
site/zh-cn/agents/tutorials/6_reinforce_tutorial.ipynb
RedContritio/docs-l10n
LogisticRegression with MinMaxScaler & PolynomialFeatures **This code template is for a classification task using LogisticRegression, with MinMaxScaler as the feature-scaling technique and PolynomialFeatures as the feature-transformation technique, combined in a pipeline.** Required Packages
!pip install imblearn

import warnings as wr
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, PolynomialFeatures
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import RandomOverSampler
from sklearn.metrics import classification_report, confusion_matrix

wr.filterwarnings('ignore')
_____no_output_____
Apache-2.0
Classification/Linear Models/LogisticRegression_MinMaxScaler_PolynomialFeatures.ipynb
mohityogesh44/ds-seed
Initialization: File path of the CSV file.
# filepath
file_path = ""
_____no_output_____
Apache-2.0
Classification/Linear Models/LogisticRegression_MinMaxScaler_PolynomialFeatures.ipynb
mohityogesh44/ds-seed
List of features required for model training.
# x_values
features = ['']
_____no_output_____
Apache-2.0
Classification/Linear Models/LogisticRegression_MinMaxScaler_PolynomialFeatures.ipynb
mohityogesh44/ds-seed
Target feature for prediction.
# y_value
target = ''
_____no_output_____
Apache-2.0
Classification/Linear Models/LogisticRegression_MinMaxScaler_PolynomialFeatures.ipynb
mohityogesh44/ds-seed
Data Fetching: Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data-manipulation and data-analysis tools. We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
df = pd.read_csv(file_path)  # reading file
df.head()  # displaying initial entries

print('Number of rows are :', df.shape[0], ', and number of columns are :', df.shape[1])
df.columns.tolist()
_____no_output_____
Apache-2.0
Classification/Linear Models/LogisticRegression_MinMaxScaler_PolynomialFeatures.ipynb
mohityogesh44/ds-seed
Data Preprocessing: Since most machine learning models in the sklearn library do not handle string categorical data or null values, we have to explicitly remove or replace null values. The snippet below defines functions that remove null values, if any exist, and convert string classes in the dataset by encoding them as integer classes.
def NullClearner(df):
    if isinstance(df, pd.Series) and (df.dtype in ["float64", "int64"]):
        df.fillna(df.mean(), inplace=True)
        return df
    elif isinstance(df, pd.Series):
        df.fillna(df.mode()[0], inplace=True)
        return df
    else:
        return df

def EncodeX(df):
    return pd.get_dummies(df)

def EncodeY(df):
    if len(df.unique()) <= 2:
        return df
    else:
        un_EncodedT = np.sort(pd.unique(df), axis=-1, kind='mergesort')
        df = LabelEncoder().fit_transform(df)
        EncodedT = [xi for xi in range(len(un_EncodedT))]
        print("Encoded Target: {} to {}".format(un_EncodedT, EncodedT))
        return df
_____no_output_____
Apache-2.0
Classification/Linear Models/LogisticRegression_MinMaxScaler_PolynomialFeatures.ipynb
mohityogesh44/ds-seed
Correlation Map: In order to check the correlation between the features, we plot a correlation matrix. It is effective for summarizing a large amount of data when the goal is to see patterns.
plt.figure(figsize=(20, 12))
corr = df.corr()
mask = np.triu(np.ones_like(corr, dtype=bool))
sns.heatmap(corr, mask=mask, linewidths=1, annot=True, fmt=".2f")
plt.show()
_____no_output_____
Apache-2.0
Classification/Linear Models/LogisticRegression_MinMaxScaler_PolynomialFeatures.ipynb
mohityogesh44/ds-seed
Feature Selection: Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used to reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and the target/outcome to Y.
# splitting data into X (features) and Y (target)
X = df[features]
Y = df[target]

x = X.columns.to_list()
for i in x:
    X[i] = NullClearner(X[i])
X = EncodeX(X)
Y = EncodeY(NullClearner(Y))
X.head()
_____no_output_____
Apache-2.0
Classification/Linear Models/LogisticRegression_MinMaxScaler_PolynomialFeatures.ipynb
mohityogesh44/ds-seed
Distribution Of Target Variable
plt.figure(figsize=(10, 6))
sns.countplot(Y, palette='pastel')
_____no_output_____
Apache-2.0
Classification/Linear Models/LogisticRegression_MinMaxScaler_PolynomialFeatures.ipynb
mohityogesh44/ds-seed
Data Splitting: The train-test split is a procedure for evaluating the performance of an algorithm. It involves taking a dataset and dividing it into two subsets: the first subset is used to fit/train the model, and the second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
# we can choose random_state and test_size as per our requirement
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=123)  # performing data splitting
_____no_output_____
Apache-2.0
Classification/Linear Models/LogisticRegression_MinMaxScaler_PolynomialFeatures.ipynb
mohityogesh44/ds-seed
Handling Target Imbalance: The challenge of working with imbalanced datasets is that most machine learning techniques will ignore, and in turn perform poorly on, the minority class, even though it is typically performance on the minority class that matters most. One approach to addressing imbalanced datasets is to oversample the minority class; the simplest approach involves duplicating examples of the minority class. We perform oversampling using the imblearn library.
X_train,y_train = RandomOverSampler(random_state=123).fit_resample(X_train, y_train)
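After resampling, one quick sanity check (a small sketch added here for clarity, not part of the original template) is to print the class counts and confirm that the classes are now balanced:

from collections import Counter

# Class distribution after RandomOverSampler; both classes should now have equal counts
print("Resampled class counts:", Counter(y_train))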
_____no_output_____
Apache-2.0
Classification/Linear Models/LogisticRegression_MinMaxScaler_PolynomialFeatures.ipynb
mohityogesh44/ds-seed
Feature Transformation

**PolynomialFeatures**: Generates polynomial and interaction features, i.e. a new feature matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. For example, if an input sample is two-dimensional and of the form [a, b], the degree-2 polynomial features are [1, a, b, a^2, ab, b^2]. Refer to the [API](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html) for the parameters.

Feature Rescaling

**MinMaxScaler**
* We use MinMaxScaler to scale the data; it scales each feature to the range 0 to 1.
* The scaling formula is (actual - min) / (max - min).
* We fit a MinMaxScaler object to the training data and then transform that same data with the fit_transform(X_train) method.

Model

**Logistic regression:** Logistic regression is a statistical model that in its basic form uses a logistic function to model a binary dependent variable, although many more complex extensions exist. In regression analysis, logistic regression (or logit regression) estimates the parameters of a logistic model (a form of binary regression). This can be extended to model several classes of events.

Model Tuning Parameters
1. penalty : {'l1', 'l2', 'elasticnet', 'none'}, default='l2'. Used to specify the norm used in the penalization. The 'newton-cg', 'sag' and 'lbfgs' solvers support only l2 penalties. 'elasticnet' is only supported by the 'saga' solver. If 'none' (not supported by the liblinear solver), no regularization is applied.
2. C : float, default=1.0. Inverse of regularization strength; must be a positive float. As in support vector machines, smaller values specify stronger regularization.
3. tol : float, default=1e-4. Tolerance for the stopping criteria.
4. solver : {'newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'}, default='lbfgs'. Algorithm to use in the optimization problem. For small datasets, 'liblinear' is a good choice, whereas 'sag' and 'saga' are faster for large ones. For multiclass problems, only 'newton-cg', 'sag', 'saga' and 'lbfgs' handle multinomial loss; 'liblinear' is limited to one-versus-rest schemes. 'newton-cg', 'lbfgs', 'sag' and 'saga' handle L2 or no penalty; 'liblinear' and 'saga' also handle L1 penalty; 'saga' also supports the 'elasticnet' penalty; 'liblinear' does not support setting penalty='none'.
5. random_state : int, RandomState instance, default=None. Used when solver == 'sag', 'saga' or 'liblinear' to shuffle the data.
6. max_iter : int, default=100. Maximum number of iterations taken for the solvers to converge.
7. multi_class : {'auto', 'ovr', 'multinomial'}, default='auto'. If the option chosen is 'ovr', then a binary problem is fit for each label. For 'multinomial' the loss minimised is the multinomial loss fit across the entire probability distribution, even when the data is binary. 'multinomial' is unavailable when solver='liblinear'. 'auto' selects 'ovr' if the data is binary or if solver='liblinear', and otherwise selects 'multinomial'.
8. verbose : int, default=0. For the liblinear and lbfgs solvers, set verbose to any positive number for verbosity.
9. n_jobs : int, default=None. Number of CPU cores used when parallelizing over classes if multi_class='ovr'. This parameter is ignored when the solver is set to 'liblinear' regardless of whether 'multi_class' is specified or not. None means 1 unless in a joblib.parallel_backend context; -1 means using all processors.
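As a quick, self-contained illustration of these two transformations (a sketch added here for clarity, not part of the original template; the tiny arrays are made up), the following shows the degree-2 expansion of a sample [a, b] and the 0-1 rescaling:

import numpy as np
from sklearn.preprocessing import MinMaxScaler, PolynomialFeatures

sample = np.array([[2.0, 3.0]])              # [a, b] with a=2, b=3
poly = PolynomialFeatures(degree=2)
print(poly.fit_transform(sample))            # [[1. 2. 3. 4. 6. 9.]] -> [1, a, b, a^2, ab, b^2]

column = np.array([[1.0], [5.0], [10.0]])    # a single feature
print(MinMaxScaler().fit_transform(column))  # [[0.], [0.444...], [1.]] via (x - min) / (max - min)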
# Build Model here
model = make_pipeline(MinMaxScaler(), PolynomialFeatures(), LogisticRegression(random_state=42))
model.fit(X_train, y_train)
_____no_output_____
Apache-2.0
Classification/Linear Models/LogisticRegression_MinMaxScaler_PolynomialFeatures.ipynb
mohityogesh44/ds-seed
Model Accuracy: The score() method returns the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that the entire label set for each sample be predicted correctly.
print("Accuracy score {:.2f} %\n".format(model.score(X_test,y_test)*100)) #prediction on testing set prediction=model.predict(X_test)
_____no_output_____
Apache-2.0
Classification/Linear Models/LogisticRegression_MinMaxScaler_PolynomialFeatures.ipynb
mohityogesh44/ds-seed
Confusion Matrix: A confusion matrix is used to understand the performance of a classification model or algorithm on a test set for which the true results are known.
cf_matrix = confusion_matrix(y_test, prediction)
plt.figure(figsize=(7, 6))
sns.heatmap(cf_matrix, annot=True, fmt="d")
_____no_output_____
Apache-2.0
Classification/Linear Models/LogisticRegression_MinMaxScaler_PolynomialFeatures.ipynb
mohityogesh44/ds-seed
Classification Report: A classification report is used to measure the quality of predictions from a classification algorithm, i.e. how many predictions were correct and how many were not, broken down per class.
* Precision: accuracy of positive predictions.
* Recall: fraction of actual positives that were correctly identified.
* f1-score: harmonic mean of precision and recall.
* support: the number of actual occurrences of the class in the specified dataset.
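For a binary problem, these metrics can also be computed by hand from the confusion matrix. The short sketch below is illustrative only (the counts are made up, chosen merely to be roughly consistent with the report shown further below) and is not part of the original template:

import numpy as np

# rows = actual class, columns = predicted class; counts are illustrative
cm = np.array([[80, 16],    # actual 0: 80 true negatives, 16 false positives
               [16, 42]])   # actual 1: 16 false negatives, 42 true positives
tn, fp, fn, tp = cm.ravel()

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)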
print(classification_report(y_test,model.predict(X_test)))
              precision    recall  f1-score   support

           0       0.83      0.83      0.83        96
           1       0.72      0.72      0.72        58

    accuracy                           0.79       154
   macro avg       0.78      0.78      0.78       154
weighted avg       0.79      0.79      0.79       154
Apache-2.0
Classification/Linear Models/LogisticRegression_MinMaxScaler_PolynomialFeatures.ipynb
mohityogesh44/ds-seed
Description: This file takes the raw xarray files (which can be found at https://figshare.com/collections/Large_ensemble_pCO2_testbed/4568555), applies feature transformations, and saves the result into a pandas dataframe. Inputs
# =========================================
# For accessing directories
# =========================================
root_dir = "/local/data/artemis/workspace/jfs2167/recon_eval"  # Set this to the path of the project
ensemble_dir_head = "/local/data/artemis/simulations/LET"  # Set this to where you have placed the raw data
data_output_dir = f"{root_dir}/data/processed"
reference_output_dir = f"{root_dir}/references"
xco2_path = f"{ensemble_dir_head}/CESM/member_001/XCO2_1D_mon_CESM001_native_198201-201701.nc"  # Forcing is the same across members so only reference it once
_____no_output_____
FTL
notebooks/01_create_datasets.ipynb
charlesm93/ML_for_ocean_pCO2_interpolation
Modules
# standard imports
import os
import datetime
from pathlib import Path
from collections import defaultdict
import scipy
import random
import numpy as np
import xarray as xr
import pandas as pd
import joblib
import pickle

# machine learning libraries
from sklearn.model_selection import train_test_split

# Python file with supporting functions
import pre
Using TensorFlow backend.
FTL
notebooks/01_create_datasets.ipynb
charlesm93/ML_for_ocean_pCO2_interpolation
Predefined values
# Loading references
path_LET = f"{reference_output_dir}/members_LET_dict.pickle"
path_seeds = f"{reference_output_dir}/random_seeds.npy"
path_loc = f"{reference_output_dir}/members_seed_loc_dict.pickle"

with open(path_LET, 'rb') as handle:
    mems_dict = pickle.load(handle)
random_seeds = np.load(path_seeds)
with open(path_loc, 'rb') as handle:
    seed_loc_dict = pickle.load(handle)

# =========================================
# Setting the date range to unify the date type
# =========================================

# Define date range
date_range_start = '1982-01-01T00:00:00.000000000'
date_range_end = '2017-01-31T00:00:00.000000000'

# create date vector
dates = pd.date_range(start=date_range_start, end=date_range_end, freq='MS') + np.timedelta64(14, 'D')

# Select the start and end
date_start = dates[0]
date_end = dates[420]
_____no_output_____
FTL
notebooks/01_create_datasets.ipynb
charlesm93/ML_for_ocean_pCO2_interpolation
Loop to load in data, clean it, and save it
# ensemble_list = ['CanESM2', 'CESM', 'GFDL', 'MPI']
ensemble_list = []

for ens, mem_list in mems_dict.items():
    for member in mem_list:
        # This function loads in the data, cleans it, and creates a pandas data frame
        df = pre.create_inputs(ensemble_dir_head, ens, member, dates, xco2_path=xco2_path)

        # Save the pandas data frame to my workspace
        pre.save_clean_data(df, data_output_dir, ens, member)
_____no_output_____
FTL
notebooks/01_create_datasets.ipynb
charlesm93/ML_for_ocean_pCO2_interpolation
Chapter 07: Collections in Python. Chapter 07b
#1. #a. A = [1, 0, 5, -2, -5, 7] #Lista A print(type(A)) print(A) #b. soma = A[0] + A[1] + A[5] #Somando os índices 0, 1 e 5 print(f'A soma dos índices 0, 1 e 5 é {soma}.') #c. A[4] = 100 #Modificando valor do índice 4 print(A) #d. print(A[0]) print(A[1]) print(A[2]) print(A[3]) print(A[4]) print(A[5]) #2. valores = [int(input()) for i in range(0,6)] #Lista com valores inseridos pelo usuário dentro de um range até 6 print(valores) #3. conj = [float(input()) for i in range(10)] #Criando um vetor tipo float com 10 elementos print(conj) quadrado = [] for i in conj: #Para cada item do conj quadrado1 = i**2 #Quadrado de cada item quadrado.append(quadrado1) #Criando uma lista com o quadrado de cada item print(quadrado) # 4. vetor = [int(input()) for i in range(8)] #Criando vetor com 8 posições print(vetor) x = vetor[3] #O valor 'x' é o índice 3 do vetor y = vetor[7] #O valor 'y' é o índice 7 do vetor soma = x + y #Soma de x e y print(f'X + Y = {soma}') #5. vetor = [int(input()) for i in range(10)] #Vetor com 10 posições print(vetor) par = [] cont = 0 for elemento in vetor: if elemento % 2 == 0: par.append(elemento) cont = cont + 1 print(f'Valores pares do vetor: {par}') print(f'O vetor possui {cont} valores pares.') ## 6. vetor = [int(input()) for i in range(10)] #Vetor com 10 posições print(vetor) print(max(vetor)) #Valor máximo do vetor print(min(vetor)) #Valor mínimo do vetor #7. vetor = [int(input()) for i in range(10)] #Vetor com 10 posições print(vetor) maximo = max(vetor) print(f'O valor máximo do vetor é {maximo}.') posiçao = vetor.index(maximo) print(f'A posição do valor máximo do vetor é {posiçao}.') #8. valor = [int(input()) for item in range(6)] #Vetor com 6 valores valor.reverse() #Valores na forma inversa print(valor) #9. vetor = [] for item in range(6): #Para cada item até 6 valor = int(input('Digite um número par: ')) while valor % 2 != 0: #Enquanto o valor for diferente de par valor = int(input('O número informado não é válido! Digite um número par: ')) if valor % 2 == 0: #Se o valor for par vetor.append(valor) #Lista de valores pares print(vetor) vetor.reverse() #Revertendo a lista de valores pares print(vetor) #10. vetor = [] for item in range(6): #Para cada item até 6 valor = int(input('Digite um número par: ')) while valor % 2 != 0: #Enquanto o valor for diferente de par valor = int(input('O número informado não é válido! Digite um número par: ')) if valor % 2 == 0: #Se o valor for par vetor.append(valor) #Lista de valores pares print(vetor) #11. vetor = [] for item in range(15): nota = float(input('Informe a nota do aluno.')) while nota < 1: nota = float(input('Nota inválida. Informe novamente.')) while nota > 100: nota = float(input('Nota inválida. Informe novamente.')) if nota >= 1 and nota <= 100: vetor.append(nota) print(vetor) soma = sum(vetor) media = soma / 15 print(f'A média geral da turma é {media}.') #12. vetortotal = [] vetorneg = [] vetorpos = [] cont = 0 for item in range(10): #Para cada item em um range até 10 valor = float(input('Informe um valor: ')) #Valor tipo real vetortotal.append(valor) #Vetor com o valor real informado if valor < 0: #Se o valor for negativo cont = cont + 1 vetorneg.append(valor) #Vetor dos valores negativos elif valor > 0: #Se o valor for positivo vetorpos.append(valor) #Vetor dos valores positivos soma = sum(vetorpos) #Soma dos valores do vetor positivo print(f'Vetor: {vetortotal}') print(f'Números positivos: {vetorpos}. Soma dos número positivos: {soma}') print(f'Número negativos: {vetorneg}. 
O vetor possui {cont} número(s) negativo(s).') #13. vetor = [int(input()) for item in range(5)] maior = max(vetor) menor = min(vetor) posiçao = vetor.index(maior) print(f'O maior valor do vetor é {maior} e sua posição é a {posiçao}.') #14. vetor = [int(input()) for item in range(10)] from collections import Counter #Importando método counter iguais = Counter(vetor) #Conta quantas vezes aparece determinado valor print(f'Valores / Repetições: {iguais}') #15. vetor = [int(input()) for item in range(20)] a = sorted(vetor) #Organiza o vetor em ordem crescente print(a) b= sorted(set(vetor)) #Organiza os vetores em ordem crescente e elimina os valores repetidos print(b) #16. vetor = [] for item in range(5): valor = float(input('Informe um valor')) vetor.append(valor) print(vetor) codigo = int(input('Informe um código (1 ou 2): ')) if codigo != 1 and codigo != 2: print('Código inválido!') if codigo == 1: print(vetor) elif codigo == 2: vetor.reverse() print(vetor) #17. vetor = [] for elemento in range(10): valor = int(input('Informe um valor.')) if valor < 0: #Se o valor informado for menor que 0 vetor.append(0) #Adicionar o valor 0 no vetor else: vetor.append(valor) #Adicionar o valor informado no vetor print(vetor) #18. vetor = [] cont = 0 x = int(input('Informe o valor de x.')) for elemento in range(10): valor = int(input('Informe um valor.')) if valor % x == 0: vetor.append(valor) cont = cont + 1 print(f'O número {x} possui {cont} múltiplo(s): {vetor}') #19. vetor = [] cont = 0 #Indice for item in range(50): elemento = ((cont + 5) * cont) % (cont + 1) vetor.append(elemento) cont = cont +1 #Indice + 1 print(vetor) #20. FALTOU A IMPRESSÃO DE DOIS ELEMENTOS DOS VETORES POR LINHA vetor = [] impar = [] for item in range(10): valor = int(input('Informe um valor no intervalo [0,50]: ')) while valor < 0: valor = int(input('Valor inválido! Informe um valor no intervalo [0,50]: ')) while valor > 50: valor = int(input('Valor inválido. Informe um valor no intervalo [0,50]: ')) vetor.append(valor) if valor % 2 != 0: impar.append(valor) print(vetor) print(impar) #21. a = [] b = [] c = [] for elemento in range(10): valora = int(input('Informe um valor para o vetor A.')) valorb = int(input('Informe um valor para o vetor B.')) valorc = valora - valorb c.append(valorc) print(a.append(valora)) print(a.append(valorb)) print(f'A - B: {c}') #22. a = [] b = [] c = [] pos = 0 #Indice for elemento in range(10): valora = int(input('Informe um valor para o vetor A.')) #Valores do vetor A valorb = int(input('Informe um valor para o vetor B.')) #Valores do vetor B if pos % 2 == 0: #Se o resto da divisao indice/2 = 0 c.append(valora) else: c.append(valorb) pos = pos + 1 a.append(valora) b.append(valorb) print(f'Vetor A: {a}') print(f'Vetor B: {b}') print(f'Vetor C: {c}') #23. a = [] b = [] vetorescalar = [] for elemento in range(5): valora = float(input('Informe um valor para o vetor A.')) #Valores do vetor A valorb = float(input('Informe um valor para o vetor B.')) #Valores do vetor B escalar = valora * valorb #Escalar = x1*y1 vetorescalar.append(escalar) #Criando vetor a partir dos valores do escalar a.append(valora) b.append(valorb) print(a) print(b) print(vetorescalar) print(sum(vetorescalar)) #Vetor escalar de A e B #25. 
elemento = 0 vetor = [] while len(vetor) < 101: #Enquanto o tamanho do vetor for menor que 101 if elemento % 7 != 0: #Se o resto da divisão por 7 for diferente de 0 vetor.append(elemento) elif elemento % 10 == 7: #Se o resto da divisão por 10 for igual a 7 vetor.append(elemento) elemento = elemento + 1 print(vetor) #26. n = 10 #número de elementos do vetor somav = 0 vetor = [] for elemento in range(10): v = int(input()) somav = somav + v #soma dos elementos do vetor media = somav / n #média do vetor vetor.append(v) #vetor v print(vetor) print(media) somatorio2 = 0 for elemento in vetor: #para cada elemento do vetor somatorio = (elemento - media)**2 #aplicação da parte final da formula DV somatorio2 = somatorio2 + somatorio #aplicação da parte final da formula DV print(somatorio2) a = 1 / (n - 1) #Primeira parte da formula DP DV = (a * somatorio2)**(1/2) #calculo final DV print(f'O desvio padrão do vetor v é {DV}.')
12 14 16 18 19 13 15 17 19 23 [12, 14, 16, 18, 19, 13, 15, 17, 19, 23] 16.6 98.39999999999999 O desvio padrão do vetor v é 3.306559138036598.
MIT
Geek Univesity/Cap07.ipynb
otaviosanluz/learning-Python
Chapter 07a
#Exemplo. galera = list() dados = list() for c in range(3): dados.append(str(input('Nome: '))) dados.append(int(input('Idade: '))) galera.append(dados[:]) dados.clear() print(galera) #Crie um programa que cria uma matriz de dimensão 3x3 e preencha com valores lidos pelo teclado. No final, mostre a matriz na tela, com a formatação correta. matriz = [[0, 0, 0], [0, 0, 0], [0, 0, 0]] for linha in range(3): for coluna in range(3): matriz[linha][coluna] = int(input(f'Digite um valor para [{linha}, {coluna}]: ')) print('-=' * 30) for linha in range(3): for coluna in range(3): print(f'[{matriz[linha][coluna]:^5}]', end='') print() #Aprimore o desafio anterior, mostrando no final: soma de todos os valores digitados; soma dos valores da terceira coluna; maior valor da segunda linha. #Variáveis matriz = [[0, 0, 0], [0, 0, 0], [0, 0, 0]] spar = 0 #Soma dos pares maior = 0 #Maior valor scol = 0 #Soma da terceira coluna #Preenchendo a matriz for linha in range(3): for coluna in range(3): matriz[linha][coluna] = int(input(f'Digite um valor para [{linha}, {coluna}]: ')) print('-=' * 30) #Organizando a matriz for linha in range(3): for coluna in range(3): print(f'[{matriz[linha][coluna]:^5}]', end='') if matriz[linha][coluna] % 2 == 0: #Soma dos pares spar = spar + matriz[linha][coluna] print() print('-=' * 30) print(f'A soma dos pares é {spar}.') for linha in range(3): #Soma da terceira coluna scol = scol + matriz[linha][2] print(f'A soma dos valores da terceira coluna é {scol}.') for coluna in range(3): if coluna == 0: maior = matriz[1][coluna] elif matriz [1][coluna] > maior: maior = matriz[1][coluna] print(f'O maior valor da segunda linha é {maior}.') #1. matriz = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]] maior = [] #variável maior que 10 for l in range(4): for c in range(4): matriz[l][c] = int(input(f'Digite um valor para [{l},{c}]:')) if matriz[l][c] > 10: #se os valores forem maior que 10 maior.append(matriz[l][c]) maior.sort() #organizando a lista em ordem crescente #Organizando a matriz for l in range(4): for c in range(4): print(f'[{matriz[l][c]:^5}]', end='') print() print(f'Valores maiores que 10: {maior}') #2. matriz = [[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]] for l in range(5): for c in range(5): if l == c: matriz[l][c] = 1 else: matriz[l][c] = 0 for l in range(5): for c in range(5): print(f'[{matriz[l][c]:^5}]', end='') print() #3. matriz = [[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]] for l in range(4): for c in range(4): matriz[l][c] = l * c for l in range(4): for c in range(4): print(f'[{matriz[l][c]:^5}]', end='') print() #4. matriz = [[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]] maior = 0 posiçao = (0,0) for l in range(4): for c in range(4): matriz[l][c] = (int(input(f'Digite um valor para [{l},{c}]:'))) if matriz[l][c] > maior: maior = matriz[l][c] posiçao = (l,c) for l in range(4): for c in range(4): print(f'[{matriz[l][c]:^5}]', end='') print() print('-=' * 30) print(f'O maior valor é {maior} e encontra-se na posição {posiçao}.') #5.Leia uma matriz 5x5. Leia também um valor X. O programa deverá fazer uma busca desse valor na matriz e, ao final, escrever a localização(linha,coluna) ou uma mensagem de 'não encontrado'. 
matriz = [] x = int(input('Informe um valor para X: ')) posiçao = [] for i in range(5): linha = [] for j in range(5): valor = (int(input(f'Valor da posição:[{i},{j}]'))) linha.append(valor) if valor == x: posiçao.append((i,j)) print(linha) matriz.append(linha) print(posiçao) #Exemplo Dicionários pessoas = {'nome':'Gustavo', 'sexo':'M', 'idade':22} print(pessoas) print(f'O {pessoas["nome"]} tem {pessoas["idade"]} anos.') print(pessoas.keys()) print(pessoas.values()) print(pessoas.items()) for k in pessoas.keys(): print(k) for v in pessoas.values(): print(v) for k,v in pessoas.items(): print(f'{k} = {v}') del pessoas['sexo'] print(pessoas) pessoas['nome'] = 'Leandro' print(pessoas) pessoas['peso'] = 98.5 print(pessoas) #Criando dicionário dentro de uma lista brasil = [] estado1 = {'uf':'Rio de Janeiro', 'sigla':'RJ'} estado2 = {'uf': 'São Paulo', 'sigla':'SP'} brasil.append(estado1) brasil.append(estado2) print(estado1) print(estado2) print(brasil) print(brasil[0]) print(brasil[1]['uf']) #Exemplo estado = dict() brasil = list() for c in range(3): estado['uf'] = str(input('Unidade Federativa:')) estado['sigla'] = str(input('Sigla:')) brasil.append(estado.copy()) print(brasil) for e in brasil: for v in e.values(): print(v, end=' ') print() #Faça um programa que leia 5 valores numéricos e guarde-os em uma lista. No final, mostre qual foi o maior e o menor valor digitado e as susas respectivas posições na lista listanum = [] maior = 0 menor = 0 posiçaomaior = [] posiçaomenor = [] for c in range(5): valor = (int(input(f'Informe um valor para a posição {c}:'))) listanum.append(valor) if c == 0: maior = menor = listanum[c] else: if listanum[c] > maior: maior = listanum[c] if listanum[c] < menor: menor = listanum[c] for i,v in enumerate(listanum): if v == maior: posiçaomaior.append(i) for i,v in enumerate(listanum): if v == menor: posiçaomenor.append(i) print(f'Você digitou os valores {listanum}.') print(f'O número {maior} foi o maior valor encontrado na(s) posição(ões): {posiçaomaior}.') print(f'O número {menor} foi o menor valor encontrado na(s) posição(ões): {posiçaomenor}.') #Crie um programa onde o usuário possa digitar vários valores numéricos e cadastre-se em uma lista. Caso o número já exista lá dentro, ele não será adicionado. No final, serão exibidos todos os valores únicos digitados, em ordem crescente. numeros = [] while True: n = int(input('Digite um valor:')) if n not in numeros: numeros.append(n) print('Valor adicionado!') else: print('valor duplicado! Não será adicionado.') r = str(input('Quer continuar? [S/N]')) if r in 'Nn': break numeros.sort() print(f'Valores digitados: {numeros}') #Crie um programa onde o usuário possa digitar cinco valores numéricos e cadastre-se em uma lista, já na posição correta de inserção(sem usar o sort()). No final, mostre a lista ordenada na tela. lista = [] for c in range(5): n = int(input('Digite um valor:')) if c == 0: lista.append(n) elif n > lista[-1]: #Se o n for maior que o último elemento da lista lista.append(n) else: pos = 0 while pos < len(lista): if n <= lista[pos]: lista.insert(pos, n) break pos = pos + 1 print(f'Valores digitados em ordem crescente: {lista}') # Crie um programa que vai ler vários números e colocar em uma lista. Depois disso, mostre: (a) Quantos números foram digitados (b) A lista de valores, ordenada de forma decrescente (c) Se o valor 5 foi digitado e esta ou não na lista valores = [] while True: valores.append(int(input('Digite um valor:'))) resp = str(input('Quer continuar? 
[S/N]')) if resp in 'Nn': break print(f'Você digitou {len(valores)} elementos.') valores.sort(reverse=True)#Ordem decrescente print(f'Valores em ordem decrescente: {valores}.') if 5 in valores: print('O valor 5 faz parte da lista.') else: print('O valor 5 não faz parte da lista.') #Crie um programa que vai ler vários números e colocar em uma lista. Depois disso, crie duas listas extras que vão conter apenas os valores pares e os valores ímpares digitados, respectivamente. Ao final, mostre o conteúdo das três listas geradas. lista = [] while True: lista.append(int(input('Informe um valor:'))) resp = str(input('Quer continuar? [S/N]')) if resp in 'Nn': break listapar = [] listaimpar = [] for valor in lista: if valor % 2 == 0: listapar.append(valor) else: listaimpar.append(valor) print(lista) print(f'Lista com valores pares: {listapar}') print(f'Lista com valor ímpares: {listaimpar}') #Crie um programa onde o usuário digite uma expressão qualquer que use parênteses. Seu aplicativo deverá analisar se a expressão passada está com os parênteses abertos e fechados na ordem correta. exp = str(input('Digite a expressão: ')) pilha = [] for simbolo in exp: if simbolo == '(': pilha.append('(') elif simbolo == ')': if len(pilha) > 0: pilha.pop() else: pilha.append(')') break if len(pilha) == 0: print('Sua expressão esta válida!') else: print('Sua expressão esta errada!') #Faça um programa que leia nome e peso de várias pessoas, guardando tudo em uma lista. No final mostre: (a)Quantas pessoas foram cadastradas. (b) Uma listagem com as pessoas mais pesadas. (c) Uma listagem com as pessoas mais leves. lista = [] principal = [] pessoas = 0 maior = 0 menor = 0 while True: nome = str(input('Informe o nome da pessoa:')) peso = float(input( f'Informe o peso de {nome}: ')) lista.append(nome) lista.append(peso) if len(principal) == 0: maior = menor = lista[1] else: if lista[1] > maior: maior = lista[1] if lista[1] < menor: menor = lista[1] pessoas = pessoas + 1 principal.append(lista[:]) lista.clear() resp = str(input('Quer continuar? [S/N]')) if resp in 'Nn': break print(pessoas) print(principal) print(f'Maior peso: {maior}kg') for p in principal: if p[1] == maior: print(f'{p[0]}') print(f'Menor peso: {menor}kg') for p in principal: if p[1] == menor: print(f'{p[0]}') #Faça um programa que leia nome e média de um aluno, guardando também a situação em um dicionário. No final, mostre o conteúdo da estrutura na tela. aluno = {} aluno['nome'] = str(input('Informe o nome do aluno:')) aluno['média'] = float(input('Informe a média do aluno: ')) if aluno['média'] >= 7: aluno['situação'] = 'Aprovado' else: aluno['situação'] = 'Reprovado' print(aluno) #Crie um programa onde 4 jogadores joguem um dado e tenham resultados aleatórios. Guarde esses resultados em um dicionário. No final, coloque esse dicionário em ordem, sabendo que o vencedor tirou o maior número no dado. from random import randint #método para gerar números aleatórios from operator import itemgetter jogo = {'Jogador1':randint(1, 6), 'Jogador2':randint(1, 6), 'Jogador3':randint(1, 6), 'Jogador4':randint(1, 6)} ranking = [] print('Valores sorteados:') for k,v in jogo.items(): print(f'{k} tirou {v} no dado.') ranking = sorted(jogo.items(), key=itemgetter(1), reverse=True) print(ranking) #reverse=True ordena na ordem decrescente #Crie um programa que leia, nome, ano de nascimento e carteira de trabalho e cadastre-os (com idade) em um dicionário. Se, por acaso, a CTPS for diferente de zero, o dicionário receberá também o ano de contratação e o salário. 
Calcule e acrescente, além da idade, com quantos anos a pessoa vai se aposentar. from datetime import datetime #Importando o ano do computador dados = {} dados['nome'] = str(input('Nome:')) nasc = int(input('Ano de nascimento: ')) dados['idade'] = datetime.now().year - nasc #ano atual - ano de nascimento dados['ctps'] = int(input('Carteira de Trabalho (0 não tem): ')) if dados['ctps'] != 0: dados['contratação'] = int(input('Ano de Contratação: ')) dados['salário'] = float(input('Salário: R$')) dados['aposentadoria'] = dados['idade'] + ((dados['contratação'] + 35) - datetime.now().year) print(dados) #Crie um programa que gerencie o aproveitamento de um jogador de futebol. O programa vai ler o nome do jogador e quantas partidas ele jogou. Depois vai ler a quantidade de gols feitos em cada partida. No final, tudo isso será guardado em um dicionário, incluindo o total de gols feitos durante o campeonato. jogador = {} jogador['nome'] =str(input('Nome do jogador:')) tot = int(input(f'Quantas partidas {jogador["nome"]} jogou: ')) partidas = [] for c in range(tot): partidas.append(int(input(f'Quantos gols na partida {c+1}:'))) jogador['gols'] = partidas[:] jogador['total'] = sum(partidas) print(jogador) #Crie um programa que leia nome, sexo e idade de várias pessoas, guardando os dados de cada pessoa em um dicionário e todos os dicionários em uma lista. No final, mostre: (a)Quantas pessoas cadastradas. (b)A média de idade. (c)Uma lista com mulheres. (d)Uma lista com idade acima da média. galera = [] pessoa = {} soma = media = 0 while True: pessoa.clear pessoa['nome'] = str(input('Nome:')) while True: pessoa['sexo'] = str(input('Sexo: [M/F]')).upper()[0] if pessoa['sexo'] in 'MmFf': break print('ERRO! Por favor, digite apenas M ou F.') pessoa['idade'] = int(input('Idade:')) soma = soma + pessoa['idade'] galera.append(pessoa.copy()) while True: resp = str(input('Quer continuar? [S/N]')).upper()[0] if resp in 'SN': break print('ERRO! Responda apenas S ou N.') if resp == 'N': break print(galera) print(f'Ao todo temos {len(galera)} pessoas cadastradas.') media = soma / len(galera) print(f'A média de idade é de {media} anos.') print(f'As mulheres cadastradas foram', end='') for p in galera: if p['sexo'] == 'F': print(f'{p["nome"]}', end='') print() print('Lista das pessoas que estão acima da média: ') for p in galera: if p['idade'] >= media: print(' ') for k,v in p.items(): print(f'{k} = {v}', end='') print() #6. matriz1 = [] matriz2 = [] matriz3 = [] for i in range(4): linha1 = [] linha2 = [] linha3 = [] for j in range(4): valor1 = (int(input(f'Matriz 1 posição [{i+1},{j+1}:]'))) linha1.append(valor1) valor2 = (int(input(f'Matriz 2 posição [{i+1},{j+1}:]'))) linha2.append(valor2) if valor1 == valor2: linha3.append(valor1) elif valor1 > valor2: linha3.append(valor1) elif valor1 < valor2: linha3.append(valor2) matriz1.append(linha1[:]) matriz2.append(linha2[:]) matriz3.append(linha3[:]) print(matriz1) print(matriz2) print(matriz3) #7. matriz = [] for i in range(10): linha = [] for j in range(10): if i < j: valor = (2*i) + (7*j) - 2 linha.append(valor) elif i == j: valor = (3*(i**2)) - 1 linha.append(valor) elif i > j: valor = (4*(i**3)) - (5*(j**2)) + 1 linha.append(valor) print(linha) matriz.append(linha[:]) #8. 
matriz = [] soma = [] for i in range(3): linha = [] for j in range(3): valor = int(input(f'Informe um valor para a posição [{i+1},{j+1}]')) linha.append(valor) if j > i: soma.append(valor) matriz.append(linha[:]) print(matriz) s = sum(soma) print(f'A soma dos valores acima da diagonal principal é {s}.') #9. matriz = [] soma = [] for i in range(3): linha = [] for j in range(3): valor = int(input(f'Informe um valor para a posição [{i+1}.{j+1}]')) linha.append(valor) if i > j: soma.append(valor) matriz.append(linha[:]) print(matriz) s = sum(soma) print(f'A soma dos valores abaixo da diagonal principal é {s}.') #10. matriz = [] soma = [] for i in range(3): linha = [] for j in range(3): valor = int(input(f'Informe um valor para a posição [{i+1},{j+1}]')) linha.append(valor) if i == j: soma.append(valor) matriz.append(linha[:]) print(matriz) s = sum(soma) print(f'A soma dos valores da diagonal principal é {s}.') #13. matriz = [] for i in range(4): linha = [] for j in range(4): valor = int(input(f'Informe um valor para a posição [{i+1},{j+1}]:')) linha.append(valor) matriz.append(linha[:]) print(matriz) #14. from random import randint bingo = [] for i in range(5): linha = [] for j in range(5): valor = int(randint(1,99)) linha.append(valor) bingo.append(linha[:]) print(bingo) #18. matriz = [] vetor = [] cont1 = 0 cont2 = 0 cont3 = 0 for i in range(3): linha = [] for j in range(3): valor = int(input(f'Informe um valor para a posição [{i+1},{j+1}]:')) linha.append(valor) if j == 0: cont1 = cont1 + valor elif j == 1: cont2 = cont2 + valor elif j == 2: cont3 = cont3 + valor matriz.append(linha) vetor.append(cont1) vetor.append(cont2) vetor.append(cont3) print(vetor) print(matriz) #19. FALTA LETRA C matriz = [] maiornota = 0 media = 0 somanotas = 0 for i in range(1, 6): linha = [] for j in range(1, 5): if j == 1: matricula = int(input(f'Número de matrícula do aluno {i}:')) linha.append(matricula) if j == 2: prova = int(input(f'Média das provas do aluno {i}:')) linha.append(prova) if j == 3: trabalho = int(input(f'Média dos trabalhos do aluno {i}:')) linha.append(trabalho) if j == 4: notafinal = prova + trabalho print(f'Nota final do aluno {i}: {notafinal}') somanotas = somanotas + notafinal if notafinal > maiornota: maiornota = notafinal linha.append(notafinal) print('=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-') matriz.append(linha[:]) print(f'A maior nota é {maiornota}.') media = somanotas / 5 print(f'Média aritmética das notas finais: {media}.') print(matriz) #20. matriz = [] soma1 = 0 soma2 = 0 soma3 = 0 soma4 = 0 for i in range(1, 4): linha = [] for j in range(1, 7): valor = int(input(f'Valor para a posição [{i},{j}]:')) if j % 2 != 0: soma1 = soma1 + valor elif j == 2: soma2 = soma2 + valor elif j == 4: soma2 = soma2 + valor linha.append(valor) matriz.append(linha[:]) media = soma2 / 6 print(f'Soma dos elementos das colunas ímpares: {soma1}') print(f'Média dos elementos da segunda e quarta coluna: {media}') print(matriz) #21. 
matriz1 = [] matriz2 = [] matrizsoma = [] matrizsub = [] for i in range(1, 3): linha1 = [] linha2 = [] linhasoma = [] linhasub = [] for j in range(1, 3): valor1 = float(input(f'Matriz 1 posição [{i},{j}]:')) valor2 = float(input(f'Matriz 2 posição [{i},{j}]:')) linha1.append(valor1) linha2.append(valor2) soma = valor1 + valor2 linhasoma.append(soma) sub = valor1 - valor2 linhasub.append(sub) matriz1.append(linha1[:]) matriz2.append(linha2[:]) matrizsoma.append(linhasoma) matrizsub.append(linhasub) print('-=-=-=-=-=-=-=-=-=-=') print('Matriz 1') print(matriz1) print('-=-=-=-=-=-=-=-=-=-=') print('Matriz 2') print(matriz2) print('-=-=-=-=-=-=-=-=-=-=') print('Matriz 1 + Matriz 2') print(matrizsoma) print('-=-=-=-=-=-=-=-=-=-=') print('Matriz 1 - Matriz 2') print(matrizsub)
Matriz 1 posição [1,1]:1 Matriz 2 posição [1,1]:2 Matriz 1 posição [1,2]:3 Matriz 2 posição [1,2]:4 Matriz 1 posição [2,1]:5 Matriz 2 posição [2,1]:6 Matriz 1 posição [2,2]:7 Matriz 2 posição [2,2]:8 -=-=-=-=-=-=-=-=-=-= Matriz 1 [[1.0, 3.0], [5.0, 7.0]] -=-=-=-=-=-=-=-=-=-= Matriz 2 [[2.0, 4.0], [6.0, 8.0]] -=-=-=-=-=-=-=-=-=-= Matriz 1 + Matriz 2 [[3.0, 7.0], [11.0, 15.0]] -=-=-=-=-=-=-=-=-=-= Matriz 1 - Matriz 2 [[-1.0, -1.0], [-1.0, -1.0]]
MIT
Geek Univesity/Cap07.ipynb
otaviosanluz/learning-Python
Creating a more elegant bounding box
import cv2 import numpy as np from google.colab.patches import cv2_imshow def draw_bbox(img, coord1, coord2, color1=(6, 253, 2), color2=(4,230,1), r=5, d=5, thickness=1): x1, y1 = coord1 x2, y2 = coord2 if color2 is not None: # Top left cv2.line(img, (x1 + r, y1), (x1 + r + d, y1), color2, thickness+1) cv2.line(img, (x1, y1 + r), (x1, y1 + r + d), color2, thickness+1) cv2.ellipse(img, (x1 + r, y1 + r), (r, r), 180, 0, 90, color2, thickness+1) # Top right cv2.line(img, (x2 - r, y1), (x2 - r - d, y1), color2, thickness+1) cv2.line(img, (x2, y1 + r), (x2, y1 + r + d), color2, thickness+1) cv2.ellipse(img, (x2 - r, y1 + r), (r, r), 270, 0, 90, color2, thickness+1) # Bottom left cv2.line(img, (x1 + r, y2), (x1 + r + d, y2), color2, thickness+1) cv2.line(img, (x1, y2 - r), (x1, y2 - r - d), color2, thickness+1) cv2.ellipse(img, (x1 + r, y2 - r), (r, r), 90, 0, 90, color2, thickness+1) # Bottom right cv2.line(img, (x2 - r, y2), (x2 - r - d, y2), color2, thickness+1) cv2.line(img, (x2, y2 - r), (x2, y2 - r - d), color2, thickness+1) cv2.ellipse(img, (x2 - r, y2 - r), (r, r), 0, 0, 90, color2, thickness+1) # Top left cv2.line(img, (x1 + r, y1), (x1 + r + d, y1), color1, thickness) cv2.line(img, (x1, y1 + r), (x1, y1 + r + d), color1, thickness) cv2.ellipse(img, (x1 + r, y1 + r), (r, r), 180, 0, 90, color1, thickness) # Top right cv2.line(img, (x2 - r, y1), (x2 - r - d, y1), color1, thickness) cv2.line(img, (x2, y1 + r), (x2, y1 + r + d), color1, thickness) cv2.ellipse(img, (x2 - r, y1 + r), (r, r), 270, 0, 90, color1, thickness) # Bottom left cv2.line(img, (x1 + r, y2), (x1 + r + d, y2), color1, thickness) cv2.line(img, (x1, y2 - r), (x1, y2 - r - d), color1, thickness) cv2.ellipse(img, (x1 + r, y2 - r), (r, r), 90, 0, 90, color1, thickness) # Bottom right cv2.line(img, (x2 - r, y2), (x2 - r - d, y2), color1, thickness) cv2.line(img, (x2, y2 - r), (x2, y2 - r - d), color1, thickness) cv2.ellipse(img, (x2 - r, y2 - r), (r, r), 0, 0, 90, color1, thickness) return img img = np.ones((140, 105, 3), np.uint8)*255 img = draw_bbox(img, (5,5), (100,30), color1=(6, 253, 2), color2=(4,230,1), r=5, d=4, thickness=1) img = draw_bbox(img, (5,40), (100, 65), color1=(6, 253, 2), color2=None, r=5, d=4, thickness=1) img = draw_bbox(img, (5,75), (100,100), color1=(255,186,143), color2=(205,80,42), r=5, d=4, thickness=1) img = draw_bbox(img, (5,110), (100,135), color1=(109,114,210), color2=(10,2,122), r=5, d=4, thickness=1) cv2_imshow(img)
_____no_output_____
MIT
BoundingBoxBeautiful.ipynb
escudero/BoundingBoxBeautiful
Chapter 4: Trees and Graphs
# Adjacency list graph
class Graph():
    def __init__(self):
        self.nodes = []

    def add(self, node):
        self.nodes.append(node)

class Node():
    def __init__(self, name, adjacent=None, marked=False):
        self.name = name
        self.adjacent = adjacent if adjacent is not None else []  # avoid a shared mutable default list
        self.marked = marked
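A quick usage sketch (added here for illustration, not part of the original notebook) showing how the two classes fit together:

# Build a tiny directed graph: a -> b
a = Node("a")
b = Node("b")
a.adjacent.append(b)

g = Graph()
g.add(a)
g.add(b)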
_____no_output_____
Apache-2.0
chapter4.ipynb
hangulu/ctci
4.1 Route Between Nodes: Given a directed graph, design an algorithm to find out whether there is a route between two nodes.
# Bidirectional breadth-first search. Note: this expands forward from both nodes, so it
# effectively treats the adjacency lists as bidirectional; for a strictly directed graph,
# a plain BFS from node1 to node2 is the safer choice.
def route(node1, node2):
    if node1 == node2:
        return True
    # Track visited nodes and the frontier for each search separately
    visited1, visited2 = {node1}, {node2}
    queue1, queue2 = [node1], [node2]
    while queue1 or queue2:
        if queue1:
            n1 = queue1.pop(0)
            if n1 in visited2:       # the two searches have met
                return True
            check_neighbor(n1, queue1, visited1)
        if queue2:
            n2 = queue2.pop(0)
            if n2 in visited1:
                return True
            check_neighbor(n2, queue2, visited2)
    return False

def check_neighbor(node, queue, visited):
    for neighbor in node.adjacent:
        if neighbor not in visited:
            visited.add(neighbor)
            queue.append(neighbor)
_____no_output_____
Apache-2.0
chapter4.ipynb
hangulu/ctci
4.2 Minimal Tree: Given a sorted (increasing order) array with unique integer elements, write an algorithm to create a binary search tree with minimal height.
class Node():
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

def minimal(array, start=None, end=None):
    # Default to the full array on the first call
    if start is None:
        start, end = 0, len(array) - 1
    if end < start:
        return None
    mid = (start + end) // 2
    # Recursively place the nodes, using the middle element as the root of each sub-tree
    return Node(array[mid], minimal(array, start, mid - 1), minimal(array, mid + 1, end))
_____no_output_____
Apache-2.0
chapter4.ipynb
hangulu/ctci
4.3 List of Depths: Given a binary tree, design an algorithm which creates a linked list of all the nodes at each depth (e.g., if you have a tree with depth D, you'll have D linked lists).
class LinkedList():
    def __init__(self, data):
        self.data = data
        self.next = None

    def insert(self, node):
        self.next = node

def list_of_depths(root, full_list, level):
    if root is None:
        return
    if len(full_list) == level:
        # First node visited at this depth: start a new linked list for the level
        lnkedlst = LinkedList(None)
        full_list.append(lnkedlst)
    else:
        lnkedlst = full_list[level]
    lnkedlst.insert(root)
    list_of_depths(root.left, full_list, level + 1)
    list_of_depths(root.right, full_list, level + 1)

def levels_linked(root):
    full_list = []
    list_of_depths(root, full_list, 0)
    return full_list
_____no_output_____
Apache-2.0
chapter4.ipynb
hangulu/ctci
4.4 Check Balanced: Implement a function to check if a binary tree is balanced. For the purposes of this question, a balanced tree is defined to be a tree such that the heights of the two subtrees of any node never differ by more than one.
import sys

def check_height(root):
    # Define an error code to return if a sub-tree is not balanced
    min_int = -sys.maxsize - 1
    # The height of the null tree is -1
    if root is None:
        return -1
    # Check the height of the left sub-tree. If it's min_int, it is unbalanced
    left_height = check_height(root.left)
    if left_height == min_int:
        return min_int
    # Check the height of the right sub-tree
    right_height = check_height(root.right)
    if right_height == min_int:
        return min_int
    # If the height difference is more than one, return min_int (the error code);
    # otherwise return the height of the balanced sub-tree + 1
    height_diff = left_height - right_height
    if abs(height_diff) > 1:
        return min_int
    else:
        return max(left_height, right_height) + 1

def is_balanced(root):
    return check_height(root) != (-sys.maxsize - 1)
_____no_output_____
Apache-2.0
chapter4.ipynb
hangulu/ctci
4.5 Validate BST: Implement a function to check if a binary tree is a binary search tree.
# A binary tree is a binary search tree if every left child is lower than its parent node,
# which in turn is lower than its right child.
def validate_bst(root):
    if root is None:
        return True
    to_visit = [root]
    while len(to_visit) > 0:
        node = to_visit.pop(0)
        if node.left is not None:
            if node.data <= node.left.data:
                return False
            else:
                to_visit.append(node.left)
        if node.right is not None:
            if node.data > node.right.data:
                return False
            to_visit.append(node.right)
    return True

# The above method only checks each node against its children, without checking against the
# values in the rest of the tree. The version below corrects for that by propagating min/max bounds.
def val_bst(root):
    # Call the recursive function
    return validate_helper(root, None, None)

def validate_helper(node, mini, maxi):
    # Return True if the node is None
    if node is None:
        return True
    # A given node cannot be greater than the recorded maximum, or less than or equal to the recorded minimum
    if (mini is not None and node.data <= mini) or (maxi is not None and node.data > maxi):
        return False
    # If either sub-tree is invalid, return False
    if (not validate_helper(node.left, mini, node.data)) or (not validate_helper(node.right, node.data, maxi)):
        return False
    return True
_____no_output_____
Apache-2.0
chapter4.ipynb
hangulu/ctci
4.6 Successor: Write an algorithm to find the "next" node (i.e., in-order successor) of a given node in a binary search tree. You may assume that each node has a link to its parent.
def successor(node):
    if node is None:
        return None
    # If the node has a right sub-tree, return the leftmost node in that sub-tree
    if node.right is not None:
        return leftmost(node.right)
    else:
        q = node
        # Find node's parent (not implemented in my Node class, but an assumption from the question)
        x = q.parent
        # Go up until the current node is the left child of its parent (left -> current -> right)
        while x is not None and x.left != q:
            q = x
            x = x.parent
        return x

def leftmost(node):
    if node is None:
        return None
    while node.left is not None:
        node = node.left
    return node
_____no_output_____
Apache-2.0
chapter4.ipynb
hangulu/ctci
4.7 Build Order: You are given a list of projects and a list of dependencies (which is a list of pairs of projects, where the second project is dependent on the first project). All of a project's dependencies must be built before the project is. Find a build order that will allow the projects to be built. If there is no valid build order, return an error.
# Topological sort using DFS # Store the possible states of nodes class State(): BLANK = 0 PARTIAL = 1 COMPLETED = 2 # Create a class to store each vertex class Vertex(): # Store the vertex's data, state and adjacent vertices def __init__(self, key): self.id = key self.adj = set() self.state = State.BLANK # Add an edge if it does not already exist def add_edge(self, proj): self.adj.add(proj) def get_edges(self): return self.adj def get_id(self): return self.id def set_state(self, state): self.state = state def get_state(self): return self.state # Create a class to store the entire graph class Graph(): # Store a dict of vertices and the number of them def __init__(self): self.vertices = dict() # key = id, value = vertex self.num = 0 # Add vertices by using the dictionary to map from the id to the Vertex object def add_vertex(self, key): self.num += 1 self.vertices[key] = Vertex(key) # Retrieve vertices by their keys def get_vertex(self, key): if key in self.vertices: return self.vertices[key] else: return None def __contains__(self, data): return data in self.vertices # Add an edge to the vertices list if it doesn't exist there def add_edge(self, end1, end2): if end1 not in self.vertices: self.add_vertex(end1) if end2 not in self.vertices: self.add_vertex(end2) # Connect the first end to the second end self.vertices[end1].add_edge(self.vertices[end2]) def get_vertices(self): return self.vertices.keys() # Create an iterable for the graph def __iter__(self): return iter(self.vertices.values()) # Reset all the states to BLANK def reset_states(self): for proj in iter(self): proj.set_state(State.BLANK) def populate_result(graph): result = [] for proj in iter(graph): if not dfs(result, proj): return None return result # Recursive DFS def dfs(result, proj): if proj.get_state() == State.PARTIAL: return False # If the state of the current project is BLANK, visit if proj.get_state() == State.BLANK: proj.set_state(State.PARTIAL) # For every vertex in proj's adjacency list, perform DFS for adj in proj.get_edges(): if not dfs(result, adj): return False proj.set_state(State.COMPLETED) # Insert the project id to the result list result.insert(0, proj.get_id()) return True def build_order(projects, dependencies): graph = Graph() for proj in projects: graph.add_vertex(proj) for to, fro in dependencies: graph.add_edge(fro, to) return populate_result(graph) projects = [ "a", "b", "c", "d", "e", "f" ] dependencies = [ ("d", "a"), ("b", "f"), ("d", "b"), ("a", "f"), ("c", "d") ] print(build_order(projects, dependencies))
['f', 'e', 'b', 'a', 'd', 'c']
Apache-2.0
chapter4.ipynb
hangulu/ctci
4.8 First Common Ancestor: Design an algorithm and write code to find the first common ancestor of two nodes in a binary tree. Avoid storing additional nodes in a data structure. NOTE: This is not necessarily a binary search tree.
def common_ancestor(root, node1, node2):
    # Check if both nodes are in the tree
    if (not covers(root, node1)) or (not covers(root, node2)):
        return None
    return ancestor_helper(root, node1, node2)

def ancestor_helper(root, node1, node2):
    if root is None or root == node1 or root == node2:
        return root
    node1_on_left = covers(root.left, node1)
    node2_on_left = covers(root.left, node2)
    # If the nodes are on different sides, the current root is the first common ancestor
    if node1_on_left != node2_on_left:
        return root
    # Otherwise recurse into the sub-tree that contains both nodes
    child_side = root.left if node1_on_left else root.right
    return ancestor_helper(child_side, node1, node2)

# The tree "covers" a node if the node is somewhere in the tree's sub-trees
def covers(root, node):
    if root is None:
        return False
    if root == node:
        return True
    return covers(root.left, node) or covers(root.right, node)
_____no_output_____
Apache-2.0
chapter4.ipynb
hangulu/ctci
4.9 BST Sequences: A binary search tree was created by traversing through an array from left to right and inserting each element. Given a binary search tree with distinct elements, print all possible arrays that could have led to this tree.
def bst_sequences(root):
    result = []
    if root is None:
        result.append([])
        return result
    prefix = [root.data]
    # Recurse on the left and right sub-trees
    left_seq = bst_sequences(root.left)
    right_seq = bst_sequences(root.right)
    # Weave together the lists
    for left in left_seq:
        for right in right_seq:
            weaved = []
            weave_lists(left, right, weaved, prefix)
            result.extend(weaved)
    return result

def weave_lists(first, second, results, prefix):
    if len(first) == 0 or len(second) == 0:
        result = prefix.copy()
        result.extend(first)
        result.extend(second)
        results.append(result)
        return
    head_first = first.pop(0)
    prefix.append(head_first)
    weave_lists(first, second, results, prefix)
    prefix.pop()
    first.insert(0, head_first)
    head_second = second.pop(0)  # take from the front of the second list as well
    prefix.append(head_second)
    weave_lists(first, second, results, prefix)
    prefix.pop()
    second.insert(0, head_second)
_____no_output_____
Apache-2.0
chapter4.ipynb
hangulu/ctci
4.10 Check Subtree: T1 and T2 are two very large binary trees, with T1 much bigger than T2. Create an algorithm to determine if T2 is a subtree of T1. A tree T2 is a subtree of T1 if there exists a node n in T1 such that the subtree of n is identical to T2. That is, if you cut off the tree at node n, the two trees would be identical.
# The following approach converts the trees to strings based on pre-order traversal
# (node -> left -> right). If one string is a sub-string of the other, it is a sub-tree.
def contains_tree(node1, node2):
    string1 = get_order(node1, [])
    string2 = get_order(node2, [])
    return string2 in string1

def get_order(node, parts):
    # Accumulate the traversal in a list because Python strings are immutable
    if node is None:
        # Add a null indicator
        parts.append("X")
    else:
        parts.append(str(node.data))  # Add the root
        # Add left and right
        get_order(node.left, parts)
        get_order(node.right, parts)
    return " ".join(parts)
_____no_output_____
Apache-2.0
chapter4.ipynb
hangulu/ctci
4.11 Random Node: You are implementing a binary tree class from scratch which, in addition to insert, find, and delete, has a method getRandomNode() which returns a random node from the tree. All nodes should be equally likely to be chosen. Design and implement an algorithm for getRandomNode, and explain how you would implement the rest of the methods.
import random

# Create a tree class that exposes the size of the tree
class Tree():
    def __init__(self, root=None):
        self.root = root

    def size(self):
        return 0 if self.root is None else self.root.size

    def get_random(self):
        if self.root is None:
            return None
        # The index is a random number between 0 and size - 1
        index = random.randint(0, self.size() - 1)
        return self.root.get_node(index)

    # Insert a value into the tree
    def insert_in_order(self, value):
        if self.root is None:
            self.root = RandomNode(value)
        else:
            self.root.insert_in_order(value)

# A class for each node of the tree. Stores data, left, right, and the size of its sub-tree
class RandomNode():
    def __init__(self, data=None):
        self.data = data
        self.left = None
        self.right = None
        self.size = 1

    # Walk down the tree, using the left sub-tree size to locate the index
    def get_node(self, index):
        left_size = 0 if self.left is None else self.left.size
        if index < left_size:
            return self.left.get_node(index)
        elif index == left_size:
            return self
        else:
            return self.right.get_node(index - (left_size + 1))

    def insert_in_order(self, value):
        if value <= self.data:
            if self.left is None:
                self.left = RandomNode(value)
            else:
                self.left.insert_in_order(value)
        else:
            if self.right is None:
                self.right = RandomNode(value)
            else:
                self.right.insert_in_order(value)
        self.size += 1

    def find(self, value):
        if value == self.data:
            return self
        elif value < self.data:
            return self.left.find(value) if self.left is not None else None
        else:
            return self.right.find(value) if self.right is not None else None
_____no_output_____
Apache-2.0
chapter4.ipynb
hangulu/ctci
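A short, hypothetical usage sketch for the classes above (the values and the seed are illustrative only):

```python
import random

random.seed(0)  # only to make the illustration repeatable

tree = Tree()
for value in [20, 9, 25, 5, 12, 11, 14]:
    tree.insert_in_order(value)

# Each call returns one of the 7 nodes with equal probability
samples = [tree.get_random().data for _ in range(5)]
print(samples)

# find() walks the BST like a normal binary search
print(tree.root.find(12).data)  # 12
print(tree.root.find(99))       # None
```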
4.12 Paths with Sum

You are given a binary tree in which each node contains an integer value (which might be positive or negative). Design an algorithm to count the number of paths that sum to a given value. The path does not need to start or end at the root or a leaf, but it must go downwards (traveling only from parent nodes to child nodes).
def count_sum_paths(node, target_sum, running_sum, path_count):
    if node is None:
        return 0

    running_sum += node.data
    cur_sum = running_sum - target_sum
    # Count the paths ending at this node: ancestors whose running sum is cur_sum
    if cur_sum in path_count:
        total_paths = path_count[cur_sum]
    else:
        total_paths = 0

    # The path starting at the root and ending here is itself a match
    if running_sum == target_sum:
        total_paths += 1

    increment_hash(path_count, running_sum, 1)
    total_paths += count_sum_paths(node.left, target_sum, running_sum, path_count)
    total_paths += count_sum_paths(node.right, target_sum, running_sum, path_count)
    increment_hash(path_count, running_sum, -1)  # remove this running sum before returning up the tree

    return total_paths

def increment_hash(path_count, key, delta):
    # Adjust the count stored for `key` (not the key itself) by delta
    new_count = (path_count[key] + delta) if key in path_count else delta
    if new_count == 0:
        del path_count[key]
    else:
        path_count[key] = new_count
_____no_output_____
Apache-2.0
chapter4.ipynb
hangulu/ctci
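A minimal usage sketch (the small `Node` class below is defined here just for illustration). The initial call passes `running_sum=0` and an empty dictionary for the memo of running sums:

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

#        10
#       /  \
#      5   -3
#     / \    \
#    3   2   11
#   / \   \
#  3  -2   1
root = Node(10,
            Node(5, Node(3, Node(3), Node(-2)), Node(2, None, Node(1))),
            Node(-3, None, Node(11)))

# Three downward paths sum to 8: 5->3, 5->2->1, and -3->11
print(count_sum_paths(root, 8, 0, {}))  # 3
```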
An Introduction to the Amazon SageMaker IP Insights Algorithm
Unsupervised anomaly detection for suspicious IP addresses
-------
1. [Introduction](#Introduction)
2. [Setup](#Setup)
3. [Training](#Training)
4. [Inference](#Inference)
5. [Epilogue](#Epilogue)

Introduction
-------
The Amazon SageMaker IP Insights algorithm uses statistical modeling and neural networks to capture associations between online resources (such as account IDs or hostnames) and IPv4 addresses. Under the hood, it learns vector representations for online resources and IP addresses. This essentially means that if the vector representing an IP address and an online resource are close together, then it is likely for that IP address to access that online resource, even if it has never accessed it before.

In this notebook, we use the Amazon SageMaker IP Insights algorithm to train a model on synthetic data. We then use this model to perform inference on the data and show how to discover anomalies. After running this notebook, you should be able to:

- obtain, transform, and store data for use in Amazon SageMaker,
- create an AWS SageMaker training job to produce an IP Insights model,
- use the model to perform inference with an Amazon SageMaker endpoint.

If you would like to know more, please check out the [SageMaker IP Insights Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/ip-insights.html).

Setup
------
*This notebook was tested in Amazon SageMaker Studio on a ml.t3.medium instance with Python 3 (Data Science) kernel.*

Our first step is to set up our AWS credentials so that AWS SageMaker can store and access training data and model artifacts.

Select Amazon S3 Bucket
We first need to specify the locations where we will store our training data and trained model artifacts. ***This is the only cell of this notebook that you will need to edit.*** In particular, we need the following data:

- `bucket` - An S3 bucket accessible by this account.
- `prefix` - The location in the bucket where this notebook's input and output data will be stored. (The default value is sufficient.)

A few notes if you are running this notebook as a test or learning exercise on the AWS Free Tier:

1. Check the AWS Free Tier documentation to make sure you only use services (especially instance types) covered by the Free Tier.
2. Don't repeat the data generation and upload steps unnecessarily, as S3 charges by the number of read/write requests.
3. You can start with a much smaller set of users by setting `NUM_USERS = 100`.
!python --version

import boto3
import botocore
import os
import sagemaker

bucket = sagemaker.Session().default_bucket()
prefix = "sagemaker/ipinsights-tutorial-bwx"
execution_role = sagemaker.get_execution_role()
region = boto3.Session().region_name

# check if the bucket exists
try:
    boto3.Session().client("s3").head_bucket(Bucket=bucket)
except botocore.exceptions.ParamValidationError as e:
    print(
        "Hey! You either forgot to specify your S3 bucket or you gave your bucket an invalid name!"
    )
except botocore.exceptions.ClientError as e:
    if e.response["Error"]["Code"] == "403":
        print(f"Hey! You don't have permission to access the bucket, {bucket}.")
    elif e.response["Error"]["Code"] == "404":
        print(f"Hey! Your bucket, {bucket}, doesn't exist!")
    else:
        raise
else:
    print(f"Training input/output will be stored in: s3://{bucket}/{prefix}")
Training input/output will be stored in: s3://sagemaker-us-east-1-017681292549/sagemaker/ipinsights-tutorial-bwx
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
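If you are experimenting under the Free Tier, you may also want to delete everything stored under this prefix once you have finished the whole notebook. Below is a minimal cleanup sketch using boto3; it assumes the `bucket` and `prefix` variables defined above, and should only be run at the very end:

```python
import boto3

# Deletes every object under s3://{bucket}/{prefix} -- run only after you are done with the notebook
boto3.resource("s3").Bucket(bucket).objects.filter(Prefix=prefix).delete()
```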
Next, we download the modules necessary for synthetic data generation if they do not already exist.
from os import path

tools_bucket = f"jumpstart-cache-prod-{region}"  # Bucket containing the data generation module
tools_prefix = "1p-algorithms-assets/ip-insights"  # Prefix for the data generation module
s3 = boto3.client("s3")

data_generation_file = "generate_data.py"  # Synthetic data generation module
script_parameters_file = "ip2asn-v4-u32.tsv.gz"

if not path.exists(data_generation_file):
    s3.download_file(tools_bucket, f"{tools_prefix}/{data_generation_file}", data_generation_file)
    print("Downloaded {} from S3 bucket {}".format(data_generation_file, tools_bucket))

if not path.exists(script_parameters_file):
    s3.download_file(
        tools_bucket, f"{tools_prefix}/{script_parameters_file}", script_parameters_file
    )
    print("Downloaded {} from S3 bucket {}".format(script_parameters_file, tools_bucket))
_____no_output_____
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
Dataset

Apache Web Server ("httpd") is the most popular web server used on the internet. And luckily for us, it logs all requests processed by the server - by default. If a web page requires HTTP authentication, the Apache Web Server will log the IP address and authenticated user name for each requested resource. The [access logs](https://httpd.apache.org/docs/2.4/logs.html) are typically on the server under the file `/var/log/httpd/access_log`. From the example log output below, we see which IP addresses each user has connected with:

```
192.168.1.100 - user1 [15/Oct/2018:18:58:32 +0000] "GET /login_success?userId=1 HTTP/1.1" 200 476 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"
192.168.1.102 - user2 [15/Oct/2018:18:58:35 +0000] "GET /login_success?userId=2 HTTP/1.1" 200 - "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"
...
```

If we want to train an algorithm to detect suspicious activity, this dataset is ideal for SageMaker IP Insights.

First, we determine the resource we want to be analyzing (such as a login page or access to a protected file). Then, we construct a dataset containing the history of all past user interactions with the resource. We extract each 'access event' from the log and store the corresponding user name and IP address in a headerless CSV file with two columns. The first column will contain the user identifier string, and the second will contain the IPv4 address in decimal-dot notation.

```
user1, 192.168.1.100
user2, 193.168.1.102
...
```

As a side note, the dataset should include all access events. That means some `<user, IP address>` pairs will be repeated.

User Activity Simulation

For this example, we are going to simulate our own web-traffic logs. We mock up a toy website example and simulate users logging into the website from mobile devices. The details of the simulation are explained in the script [here](./generate_data.py).
from generate_data import generate_dataset # We simulate traffic for 10,000 users. This should yield about 3 million log lines (~700 MB). NUM_USERS = 10000 log_file = "ipinsights_web_traffic.log" generate_dataset(NUM_USERS, log_file) # Visualize a few log lines !head $log_file
_____no_output_____
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
Prepare the dataset

Now that we have our logs, we need to transform them into a format that IP Insights can use. As we mentioned above, we need to:

1. Choose the resource which we want to analyze users' history for
2. Extract our users' usage history of IP addresses
3. In addition, we want to separate our dataset into a training and test set. This will allow us to check for overfitting by evaluating our model on 'unseen' login events.

For the rest of the notebook, we assume that the Apache Access Logs are in the Common Log Format as defined by the [Apache documentation](https://httpd.apache.org/docs/2.4/logs.html#accesslog). We start with reading the logs into a Pandas DataFrame for easy data exploration and pre-processing.
log_file = "ipinsights_web_traffic.log" import pandas as pd df = pd.read_csv( log_file, sep=" ", na_values="-", header=None, names=[ "ip_address", "rcf_id", "user", "timestamp", "time_zone", "request", "status", "size", "referer", "user_agent", ], ) df.head()
_____no_output_____
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
We convert the log timestamp strings into Python datetimes so that we can sort and compare the data more easily.
# Convert time stamps to DateTime objects df["timestamp"] = pd.to_datetime(df["timestamp"], format="[%d/%b/%Y:%H:%M:%S") df.head()
_____no_output_____
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
We also verify the time zones of all of the time stamps. If the log contains more than one time zone, we would need to standardize the timestamps.
# Check if they are all in the same timezone num_time_zones = len(df["time_zone"].unique()) num_time_zones
_____no_output_____
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
As we see above, there is only one value in the entire `time_zone` column. Therefore, all of the timestamps are in the same time zone, and we do not need to standardize them. We can skip the next cell and go to [1. Selecting a Resource](#1.-Select-Resource).

If there is more than one time_zone in your dataset, then we parse the timezone offset and update the corresponding datetime object.

**Note:** The next cell takes about 5-10 minutes to run.
from datetime import datetime

import pytz


def apply_timezone(row):
    tz = row[1]
    tz_offset = int(tz[:3]) * 60  # Hour offset
    tz_offset += int(tz[3:5])  # Minutes offset
    return row[0].replace(tzinfo=pytz.FixedOffset(tz_offset))


if num_time_zones > 1:
    df["timestamp"] = df[["timestamp", "time_zone"]].apply(apply_timezone, axis=1)
_____no_output_____
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
1. Select Resource

Our goal is to train an IP Insights algorithm to analyze the history of user logins such that we can predict how suspicious a login event is. In our simulated web server, the server logs a `GET` request to the `/login_success` page every time a user successfully logs in. We filter our Apache logs for `GET` requests for `/login_success`. We also filter for requests that have a `status_code == 200`, to ensure that the page request was well formed.

**Note:** Every web server handles logins differently. For your dataset, determine which resource you will need to be analyzing to correctly frame this problem. Depending on your use case, you may need to do more data exploration and preprocessing.
df.head() df = df[(df["request"].str.startswith("GET /login_success")) & (df["status"] == 200)]
_____no_output_____
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
2. Extract Users and IP address

Now that our DataFrame only includes log events for the resource we want to analyze, we extract the relevant fields to construct an IP Insights dataset.

IP Insights takes in a headerless CSV file with two columns: an entity (username) ID string and the IPv4 address in decimal-dot notation. Fortunately, the Apache Web Server Access Logs output IP addresses and authenticated usernames in their own columns.

**Note:** Each website handles user authentication differently. If the Access Log does not output an authenticated user, you could explore the website's query strings or work with your website developers on another solution.
df = df[["user", "ip_address", "timestamp"]]
_____no_output_____
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
3. Create training and test dataset

As part of training a model, we want to evaluate how it generalizes to data it has never seen before.

Typically, you create a test set by reserving a random percentage of your dataset and evaluating the model after training. However, for machine learning models that make future predictions on historical data, we want to use out-of-time testing. Instead of randomly sampling our dataset, we split our dataset into two contiguous time windows. The first window is the training set, and the second is the test set.

We first look at the time range of our dataset to select a date to use as the partition between the training and test set.
df["timestamp"].describe()
/usr/local/lib/python3.6/site-packages/ipykernel_launcher.py:1: FutureWarning: Treating datetime data as categorical rather than numeric in `.describe` is deprecated and will be removed in a future version of pandas. Specify `datetime_is_numeric=True` to silence this warning and adopt the future behavior now. """Entry point for launching an IPython kernel.
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
We have login events for 10 days. Let's take the first week (7 days) of data as training and then use the last 3 days for the test set.
time_partition = ( datetime(2018, 11, 11, tzinfo=pytz.FixedOffset(0)) if num_time_zones > 1 else datetime(2018, 11, 11) ) train_df = df[df["timestamp"] <= time_partition] test_df = df[df["timestamp"] > time_partition]
_____no_output_____
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
Now that we have our training dataset, we shuffle it. Shuffling improves the model's performance since SageMaker IP Insights uses stochastic gradient descent. This ensures that login events for the same user are less likely to occur in the same mini batch. This allows the model to improve its performance in between predictions of the same user, which will improve training convergence.
# Shuffle train data train_df = train_df.sample(frac=1) train_df.head() # check the shape of training and testing print(train_df.shape, test_df.shape)
(2107898, 3) (904736, 3)
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
Store Data on S3

Now that we have simulated (or scraped) our datasets, we have to prepare and upload them to S3. We will be doing local inference; therefore, we don't need to upload our test dataset.
# Output dataset as headerless CSV
train_data = train_df.to_csv(index=False, header=False, columns=["user", "ip_address"])

# Upload the training data to S3
train_data_file = "train.csv"
key = os.path.join(prefix, "train", train_data_file)
s3_train_data = f"s3://{bucket}/{key}"

print(f"Uploading data to: {s3_train_data}")
boto3.resource("s3").Bucket(bucket).Object(key).put(Body=train_data)

# Configure SageMaker IP Insights Input Channels
input_data = {
    "train": sagemaker.session.s3_input(
        s3_train_data, distribution="FullyReplicated", content_type="text/csv"
    )
}
Uploading data to: s3://sagemaker-us-east-1-017681292549/sagemaker/ipinsights-tutorial-bwx/train/train.csv
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
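Note: `sagemaker.session.s3_input` is the SageMaker Python SDK v1 name; on SDK v2 it is deprecated in favor of `TrainingInput`. If you hit a deprecation warning or an AttributeError in the cell above, the v2 equivalent should look roughly like this (a sketch, assuming SDK v2):

```python
from sagemaker.inputs import TrainingInput

input_data = {
    "train": TrainingInput(
        s3_train_data, distribution="FullyReplicated", content_type="text/csv"
    )
}
```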
Training
---
Once the data is preprocessed and available in the necessary format, the next step is to train our model on the data. There are a number of parameters required by the SageMaker IP Insights algorithm to configure the model and define the computational environment in which training will take place. The first of these is to point to a container image which holds the algorithm's training and hosting code:
from sagemaker.amazon.amazon_estimator import get_image_uri image = get_image_uri(boto3.Session().region_name, "ipinsights")
The method get_image_uri has been renamed in sagemaker>=2. See: https://sagemaker.readthedocs.io/en/stable/v2.html for details. Defaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
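The deprecation warning above comes from the SDK v1-style `get_image_uri` helper. On SageMaker Python SDK v2 the equivalent lookup should be along these lines (a sketch, assuming SDK v2):

```python
import boto3
import sagemaker

region = boto3.Session().region_name
image = sagemaker.image_uris.retrieve("ipinsights", region, version="1")
```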
Then, we need to determine the training cluster to use. The IP Insights algorithm supports both CPU and GPU training. We recommend using GPU machines as they will train faster. However, when the size of your dataset increases, it can become more economical to use multiple CPU machines running with distributed training. See [Recommended Instance Types](https://docs.aws.amazon.com/sagemaker/latest/dg/ip-insights.html#ip-insights-instances) for more details.

Training Job Configuration
- **train_instance_type**: the instance type to train on. We recommend `p3.2xlarge` for single GPU, `p3.8xlarge` for multi-GPU, and `m5.2xlarge` if using distributed training with CPU;
- **train_instance_count**: the number of worker nodes in the training cluster.

We also need to configure SageMaker IP Insights-specific hyperparameters:

Model Hyperparameters
- **num_entity_vectors**: the total number of embeddings to train. We use an internal hashing mechanism to map the entity ID strings to an embedding index; therefore, using an embedding size larger than the total number of possible values helps reduce the number of hash collisions. We recommend this value to be 2x the total number of unique entities (i.e. user names) in your dataset;
- **vector_dim**: the size of the entity and IP embedding vectors. The larger the value, the more information can be encoded using these representations, but using too large vector representations may cause the model to overfit, especially for small training data sets;
- **num_ip_encoder_layers**: the number of layers in the IP encoder network. The larger the number of layers, the higher the model capacity to capture patterns among IP addresses. However, a large number of layers increases the chance of overfitting. `num_ip_encoder_layers=1` is a good value to start experimenting with;
- **random_negative_sampling_rate**: the number of randomly generated negative samples to produce per 1 positive sample; `random_negative_sampling_rate=1` is a good value to start experimenting with;
    - Random negative samples are produced by drawing each octet from a uniform distribution over [0, 255];
- **shuffled_negative_sampling_rate**: the number of shuffled negative samples to produce per 1 positive sample; `shuffled_negative_sampling_rate=1` is a good value to start experimenting with;
    - Shuffled negative samples are produced by shuffling the accounts within a batch;

Training Hyperparameters
- **epochs**: the number of epochs to train. Increase this value if you continue to see the accuracy and cross entropy improving over the last few epochs;
- **mini_batch_size**: how many examples in each mini_batch. A smaller number improves convergence with stochastic gradient descent. But a larger number is necessary if using shuffled_negative_sampling to avoid sampling a wrong account for a negative sample;
- **learning_rate**: the learning rate for the Adam optimizer (try ranges in [0.001, 0.1]). Too large a learning rate may cause the model to diverge since the training would be likely to overshoot minima. On the other hand, too small a learning rate slows down the convergence;
- **weight_decay**: L2 regularization coefficient. Regularization is required to prevent the model from overfitting the training data. Too large of a value will prevent the model from learning anything;

For more details, see [Amazon SageMaker IP Insights (Hyperparameters)](https://docs.aws.amazon.com/sagemaker/latest/dg/ip-insights-hyperparameters.html).
Additionally, most of these hyperparameters can be found using SageMaker Automatic Model Tuning; see [Amazon SageMaker IP Insights (Model Tuning)](https://docs.aws.amazon.com/sagemaker/latest/dg/ip-insights-tuning.html) for more details.
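To illustrate the automatic tuning mentioned above, here is a hedged sketch of a tuning job. It assumes the `ip_insights` estimator constructed in the next cell and a separate validation channel (required so the tuner can score candidates); the objective metric name is assumed here to be `validation:discriminator_auc`, and the ranges and job counts are placeholders you should adapt to your data and budget:

```python
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

# Hypothetical search ranges; adjust to your dataset
hyperparameter_ranges = {
    "learning_rate": ContinuousParameter(1e-4, 1e-1),
    "vector_dim": IntegerParameter(64, 256),
    "random_negative_sampling_rate": IntegerParameter(1, 10),
}

tuner = HyperparameterTuner(
    estimator=ip_insights,  # the estimator defined in the next cell
    objective_metric_name="validation:discriminator_auc",  # assumed metric name; verify in the IP Insights tuning docs
    hyperparameter_ranges=hyperparameter_ranges,
    objective_type="Maximize",
    max_jobs=4,
    max_parallel_jobs=2,
)

# Requires a "validation" channel in addition to "train", e.g.:
# tuner.fit({"train": input_data["train"], "validation": validation_channel})
```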
# Set up the estimator with training job configuration ip_insights = sagemaker.estimator.Estimator( image, execution_role, instance_count=1, #instance_type="ml.p3.2xlarge", instance_type = 'ml.m5.xlarge', output_path=f"s3://{bucket}/{prefix}/output", sagemaker_session=sagemaker.Session(), ) # Configure algorithm-specific hyperparameters ip_insights.set_hyperparameters( num_entity_vectors="20000", random_negative_sampling_rate="5", vector_dim="128", mini_batch_size="1000", epochs="5", learning_rate="0.01", ) # Start the training job (should take about ~1.5 minute / epoch to complete) ip_insights.fit(input_data)
2021-12-30 01:57:12 Starting - Starting the training job... 2021-12-30 01:57:36 Starting - Launching requested ML instancesProfilerReport-1640829432: InProgress ...... 2021-12-30 01:58:36 Starting - Preparing the instances for training...... 2021-12-30 01:59:38 Downloading - Downloading input data... 2021-12-30 01:59:56 Training - Downloading the training image..... 2021-12-30 02:00:57 Training - Training image download completed. Training in progress.Docker entrypoint called with argument(s): train Running default environment configuration script [12/30/2021 02:00:58 INFO 140515720472384] Reading default configuration from /opt/amazon/lib/python3.7/site-packages/algorithm/resources/default-input.json: {'batch_metrics_publish_interval': '1000', 'epochs': '10', 'learning_rate': '0.001', 'mini_batch_size': '5000', 'num_entity_vectors': '100000', 'num_ip_encoder_layers': '1', 'random_negative_sampling_rate': '1', 'shuffled_negative_sampling_rate': '1', 'vector_dim': '128', 'weight_decay': '0.00001', '_kvstore': 'auto_gpu', '_log_level': 'info', '_num_gpus': 'auto', '_num_kv_servers': 'auto', '_tuning_objective_metric': ''} [12/30/2021 02:00:58 INFO 140515720472384] Merging with provided configuration from /opt/ml/input/config/hyperparameters.json: {'vector_dim': '128', 'random_negative_sampling_rate': '5', 'num_entity_vectors': '20000', 'epochs': '5', 'learning_rate': '0.01', 'mini_batch_size': '1000'} [12/30/2021 02:00:58 INFO 140515720472384] Final configuration: {'batch_metrics_publish_interval': '1000', 'epochs': '5', 'learning_rate': '0.01', 'mini_batch_size': '1000', 'num_entity_vectors': '20000', 'num_ip_encoder_layers': '1', 'random_negative_sampling_rate': '5', 'shuffled_negative_sampling_rate': '1', 'vector_dim': '128', 'weight_decay': '0.00001', '_kvstore': 'auto_gpu', '_log_level': 'info', '_num_gpus': 'auto', '_num_kv_servers': 'auto', '_tuning_objective_metric': ''} [12/30/2021 02:00:58 WARNING 140515720472384] Loggers have already been setup. [12/30/2021 02:00:58 INFO 140515720472384] nvidia-smi: took 0.057 seconds to run. [12/30/2021 02:00:58 INFO 140515720472384] nvidia-smi identified 0 GPUs. Process 1 is a worker. [12/30/2021 02:00:58 INFO 140515720472384] Using default worker. [12/30/2021 02:00:58 INFO 140515720472384] Loaded iterator creator application/x-ndarray for content type ('application/x-ndarray', '1.0') [12/30/2021 02:00:58 INFO 140515720472384] Loaded iterator creator text/csv for content type ('text/csv', '1.0') [12/30/2021 02:00:58 INFO 140515720472384] Checkpoint loading and saving are disabled. 
[12/30/2021 02:00:58 INFO 140515720472384] Number of GPUs being used: 0 #metrics {"StartTime": 1640829658.5182247, "EndTime": 1640829658.5206978, "Dimensions": {"Algorithm": "ipinsights", "Host": "algo-1", "Operation": "training"}, "Metrics": {"initialize.time": {"sum": 2.297639846801758, "count": 1, "min": 2.297639846801758, "max": 2.297639846801758}}} #metrics {"StartTime": 1640829658.5207944, "EndTime": 1640829658.520819, "Dimensions": {"Algorithm": "ipinsights", "Host": "algo-1", "Operation": "training", "Meta": "init_train_data_iter"}, "Metrics": {"Total Records Seen": {"sum": 0.0, "count": 1, "min": 0, "max": 0}, "Total Batches Seen": {"sum": 0.0, "count": 1, "min": 0, "max": 0}, "Max Records Seen Between Resets": {"sum": 0.0, "count": 1, "min": 0, "max": 0}, "Max Batches Seen Between Resets": {"sum": 0.0, "count": 1, "min": 0, "max": 0}, "Reset Count": {"sum": 0.0, "count": 1, "min": 0, "max": 0}, "Number of Records Since Last Reset": {"sum": 0.0, "count": 1, "min": 0, "max": 0}, "Number of Batches Since Last Reset": {"sum": 0.0, "count": 1, "min": 0, "max": 0}}} [12/30/2021 02:00:58 INFO 140515720472384] Create Store: local [02:00:58] /opt/brazil-pkg-cache/packages/AIAlgorithmsMXNet/AIAlgorithmsMXNet-1.3.x_Cuda_10.1.x.3320.0/AL2_x86_64/generic-flavor/src/src/kvstore/./kvstore_local.h:306: Warning: non-default weights detected during kvstore pull. This call has been ignored. Please make sure to use kv.row_sparse_pull() or module.prepare() with row_ids. [02:00:58] /opt/brazil-pkg-cache/packages/AIAlgorithmsMXNet/AIAlgorithmsMXNet-1.3.x_Cuda_10.1.x.3320.0/AL2_x86_64/generic-flavor/src/src/operator/././../common/utils.h:450: Optimizer with lazy_update = True detected. Be aware that lazy update with row_sparse gradient is different from standard update, and may lead to different empirical results. See https://mxnet.incubator.apache.org/api/python/optimization/optimization.html for more details. 
[12/30/2021 02:00:58 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=0 train binary_classification_accuracy <score>=0.497 [12/30/2021 02:00:58 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=0 train binary_classification_cross_entropy <loss>=0.6931654663085938 [12/30/2021 02:01:06 INFO 140515720472384] Epoch[0] Batch [1000]#011Speed: 133941.46 samples/sec#011binary_classification_accuracy=0.927374#011binary_classification_cross_entropy=0.198942 [12/30/2021 02:01:06 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=1000 train binary_classification_accuracy <score>=0.9273736263736264 [12/30/2021 02:01:06 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=1000 train binary_classification_cross_entropy <loss>=0.19894166549697861 [12/30/2021 02:01:13 INFO 140515720472384] Epoch[0] Batch [2000]#011Speed: 141583.24 samples/sec#011binary_classification_accuracy=0.949645#011binary_classification_cross_entropy=0.148631 [12/30/2021 02:01:13 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=2000 train binary_classification_accuracy <score>=0.9496446776611694 [12/30/2021 02:01:13 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=2000 train binary_classification_cross_entropy <loss>=0.14863135603867073 [12/30/2021 02:01:20 INFO 140515720472384] Epoch[0] Batch [3000]#011Speed: 141008.86 samples/sec#011binary_classification_accuracy=0.959285#011binary_classification_cross_entropy=0.125976 [12/30/2021 02:01:20 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=3000 train binary_classification_accuracy <score>=0.9592845718093969 [12/30/2021 02:01:20 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=3000 train binary_classification_cross_entropy <loss>=0.1259763848591709 [12/30/2021 02:01:27 INFO 140515720472384] Epoch[0] Batch [4000]#011Speed: 140482.40 samples/sec#011binary_classification_accuracy=0.964791#011binary_classification_cross_entropy=0.112997 [12/30/2021 02:01:27 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=4000 train binary_classification_accuracy <score>=0.9647908022994252 [12/30/2021 02:01:27 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=4000 train binary_classification_cross_entropy <loss>=0.11299730495404256 [12/30/2021 02:01:34 INFO 140515720472384] Epoch[0] Batch [5000]#011Speed: 141370.50 samples/sec#011binary_classification_accuracy=0.968326#011binary_classification_cross_entropy=0.104571 [12/30/2021 02:01:34 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=5000 train binary_classification_accuracy <score>=0.9683259348130374 [12/30/2021 02:01:34 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=5000 train binary_classification_cross_entropy <loss>=0.10457097097731333 [12/30/2021 02:01:41 INFO 140515720472384] Epoch[0] Batch [6000]#011Speed: 140469.37 samples/sec#011binary_classification_accuracy=0.970816#011binary_classification_cross_entropy=0.098655 [12/30/2021 02:01:41 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=6000 train binary_classification_accuracy <score>=0.9708161973004499 [12/30/2021 02:01:41 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=6000 train binary_classification_cross_entropy <loss>=0.09865453168325197 [12/30/2021 02:01:48 INFO 140515720472384] Epoch[0] Batch [7000]#011Speed: 139782.67 samples/sec#011binary_classification_accuracy=0.972723#011binary_classification_cross_entropy=0.094058 [12/30/2021 02:01:48 INFO 
140515720472384] #quality_metric: host=algo-1, epoch=0, batch=7000 train binary_classification_accuracy <score>=0.9727233252392515 [12/30/2021 02:01:48 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=7000 train binary_classification_cross_entropy <loss>=0.09405811343460727 [12/30/2021 02:01:55 INFO 140515720472384] Epoch[0] Batch [8000]#011Speed: 140263.31 samples/sec#011binary_classification_accuracy=0.974187#011binary_classification_cross_entropy=0.090596 [12/30/2021 02:01:55 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=8000 train binary_classification_accuracy <score>=0.9741866016747907 [12/30/2021 02:01:55 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=8000 train binary_classification_cross_entropy <loss>=0.09059624196588807 [12/30/2021 02:02:03 INFO 140515720472384] Epoch[0] Batch [9000]#011Speed: 139964.99 samples/sec#011binary_classification_accuracy=0.975349#011binary_classification_cross_entropy=0.087753 [12/30/2021 02:02:03 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=9000 train binary_classification_accuracy <score>=0.9753492945228308 [12/30/2021 02:02:03 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=9000 train binary_classification_cross_entropy <loss>=0.08775326167614032 [12/30/2021 02:02:10 INFO 140515720472384] Epoch[0] Batch [10000]#011Speed: 137405.33 samples/sec#011binary_classification_accuracy=0.976317#011binary_classification_cross_entropy=0.085435 [12/30/2021 02:02:10 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=10000 train binary_classification_accuracy <score>=0.9763169683031697 [12/30/2021 02:02:10 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=10000 train binary_classification_cross_entropy <loss>=0.08543506357543719 [12/30/2021 02:02:17 INFO 140515720472384] Epoch[0] Batch [11000]#011Speed: 140676.57 samples/sec#011binary_classification_accuracy=0.977119#011binary_classification_cross_entropy=0.083455 [12/30/2021 02:02:17 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=11000 train binary_classification_accuracy <score>=0.9771192618852832 [12/30/2021 02:02:17 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=11000 train binary_classification_cross_entropy <loss>=0.0834549344122361 [12/30/2021 02:02:24 INFO 140515720472384] Epoch[0] Batch [12000]#011Speed: 140612.77 samples/sec#011binary_classification_accuracy=0.977806#011binary_classification_cross_entropy=0.081764 [12/30/2021 02:02:24 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=12000 train binary_classification_accuracy <score>=0.9778056828597617 [12/30/2021 02:02:24 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=12000 train binary_classification_cross_entropy <loss>=0.08176383952364982 [12/30/2021 02:02:31 INFO 140515720472384] Epoch[0] Batch [13000]#011Speed: 141415.17 samples/sec#011binary_classification_accuracy=0.978406#011binary_classification_cross_entropy=0.080288 [12/30/2021 02:02:31 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=13000 train binary_classification_accuracy <score>=0.9784055072686716 [12/30/2021 02:02:31 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=13000 train binary_classification_cross_entropy <loss>=0.08028770379101018 [12/30/2021 02:02:38 INFO 140515720472384] Epoch[0] Batch [14000]#011Speed: 141183.18 samples/sec#011binary_classification_accuracy=0.978931#011binary_classification_cross_entropy=0.078995 [12/30/2021 
02:02:38 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=14000 train binary_classification_accuracy <score>=0.9789310763516892 [12/30/2021 02:02:38 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, batch=14000 train binary_classification_cross_entropy <loss>=0.07899502835799248 [12/30/2021 02:02:44 INFO 140515720472384] Epoch[0] Train-binary_classification_accuracy=0.979284 [12/30/2021 02:02:44 INFO 140515720472384] Epoch[0] Train-binary_classification_cross_entropy=0.078152 [12/30/2021 02:02:44 INFO 140515720472384] Epoch[0] Time cost=105.479 [12/30/2021 02:02:44 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, train binary_classification_accuracy <score>=0.9792840878286798 [12/30/2021 02:02:44 INFO 140515720472384] #quality_metric: host=algo-1, epoch=0, train binary_classification_cross_entropy <loss>=0.07815158404086334 #metrics {"StartTime": 1640829658.5207527, "EndTime": 1640829764.0200803, "Dimensions": {"Algorithm": "ipinsights", "Host": "algo-1", "Operation": "training"}, "Metrics": {"epochs": {"sum": 5.0, "count": 1, "min": 5, "max": 5}, "update.time": {"sum": 105499.11952018738, "count": 1, "min": 105499.11952018738, "max": 105499.11952018738}}} [12/30/2021 02:02:44 INFO 140515720472384] #progress_metric: host=algo-1, completed 20.0 % of epochs #metrics {"StartTime": 1640829658.5209367, "EndTime": 1640829764.0203376, "Dimensions": {"Algorithm": "ipinsights", "Host": "algo-1", "Operation": "training", "epoch": 0, "Meta": "training_data_iter"}, "Metrics": {"Total Records Seen": {"sum": 14755286.0, "count": 1, "min": 14755286, "max": 14755286}, "Total Batches Seen": {"sum": 14756.0, "count": 1, "min": 14756, "max": 14756}, "Max Records Seen Between Resets": {"sum": 14755286.0, "count": 1, "min": 14755286, "max": 14755286}, "Max Batches Seen Between Resets": {"sum": 14756.0, "count": 1, "min": 14756, "max": 14756}, "Reset Count": {"sum": 2.0, "count": 1, "min": 2, "max": 2}, "Number of Records Since Last Reset": {"sum": 0.0, "count": 1, "min": 0, "max": 0}, "Number of Batches Since Last Reset": {"sum": 0.0, "count": 1, "min": 0, "max": 0}}} [12/30/2021 02:02:44 INFO 140515720472384] #throughput_metric: host=algo-1, train throughput=139861.20214086538 records/second [12/30/2021 02:02:44 WARNING 140515720472384] Already bound, ignoring bind() /opt/amazon/lib/python3.7/site-packages/mxnet/module/base_module.py:502: UserWarning: Parameters already initialized and force_init=False. init_params call ignored. allow_missing=allow_missing, force_init=force_init) [12/30/2021 02:02:44 WARNING 140515720472384] optimizer already initialized, ignoring... 
[12/30/2021 02:02:44 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=0 train binary_classification_accuracy <score>=0.986 [12/30/2021 02:02:44 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=0 train binary_classification_cross_entropy <loss>=0.06770654296875 [12/30/2021 02:02:51 INFO 140515720472384] Epoch[1] Batch [1000]#011Speed: 137869.01 samples/sec#011binary_classification_accuracy=0.985843#011binary_classification_cross_entropy=0.061067 [12/30/2021 02:02:51 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=1000 train binary_classification_accuracy <score>=0.9858431568431568 [12/30/2021 02:02:51 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=1000 train binary_classification_cross_entropy <loss>=0.06106659011764603 [12/30/2021 02:02:58 INFO 140515720472384] Epoch[1] Batch [2000]#011Speed: 138088.05 samples/sec#011binary_classification_accuracy=0.985929#011binary_classification_cross_entropy=0.060214 [12/30/2021 02:02:58 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=2000 train binary_classification_accuracy <score>=0.9859290354822589 [12/30/2021 02:02:58 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=2000 train binary_classification_cross_entropy <loss>=0.060213889249261174 [12/30/2021 02:03:05 INFO 140515720472384] Epoch[1] Batch [3000]#011Speed: 134430.95 samples/sec#011binary_classification_accuracy=0.985973#011binary_classification_cross_entropy=0.059813 [12/30/2021 02:03:05 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=3000 train binary_classification_accuracy <score>=0.9859733422192603 [12/30/2021 02:03:05 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=3000 train binary_classification_cross_entropy <loss>=0.05981333435856871 [12/30/2021 02:03:13 INFO 140515720472384] Epoch[1] Batch [4000]#011Speed: 138264.45 samples/sec#011binary_classification_accuracy=0.986013#011binary_classification_cross_entropy=0.059712 [12/30/2021 02:03:13 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=4000 train binary_classification_accuracy <score>=0.9860134966258436 [12/30/2021 02:03:13 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=4000 train binary_classification_cross_entropy <loss>=0.059712011716271066 [12/30/2021 02:03:20 INFO 140515720472384] Epoch[1] Batch [5000]#011Speed: 139038.87 samples/sec#011binary_classification_accuracy=0.986063#011binary_classification_cross_entropy=0.059499 [12/30/2021 02:03:20 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=5000 train binary_classification_accuracy <score>=0.9860631873625275 [12/30/2021 02:03:20 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=5000 train binary_classification_cross_entropy <loss>=0.05949896123680537 [12/30/2021 02:03:27 INFO 140515720472384] Epoch[1] Batch [6000]#011Speed: 140046.69 samples/sec#011binary_classification_accuracy=0.986035#011binary_classification_cross_entropy=0.059461 [12/30/2021 02:03:27 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=6000 train binary_classification_accuracy <score>=0.9860353274454258 [12/30/2021 02:03:27 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=6000 train binary_classification_cross_entropy <loss>=0.05946066460484684 [12/30/2021 02:03:34 INFO 140515720472384] Epoch[1] Batch [7000]#011Speed: 139521.35 samples/sec#011binary_classification_accuracy=0.986090#011binary_classification_cross_entropy=0.059331 [12/30/2021 02:03:34 INFO 
140515720472384] #quality_metric: host=algo-1, epoch=1, batch=7000 train binary_classification_accuracy <score>=0.9860897014712184 [12/30/2021 02:03:34 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=7000 train binary_classification_cross_entropy <loss>=0.0593311418572284 [12/30/2021 02:03:41 INFO 140515720472384] Epoch[1] Batch [8000]#011Speed: 139234.53 samples/sec#011binary_classification_accuracy=0.986091#011binary_classification_cross_entropy=0.059386 [12/30/2021 02:03:41 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=8000 train binary_classification_accuracy <score>=0.9860908636420448 [12/30/2021 02:03:41 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=8000 train binary_classification_cross_entropy <loss>=0.05938634492730874 [12/30/2021 02:03:49 INFO 140515720472384] Epoch[1] Batch [9000]#011Speed: 138099.91 samples/sec#011binary_classification_accuracy=0.986147#011binary_classification_cross_entropy=0.059270 [12/30/2021 02:03:49 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=9000 train binary_classification_accuracy <score>=0.9861468725697144 [12/30/2021 02:03:49 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=9000 train binary_classification_cross_entropy <loss>=0.05926991367901634 [12/30/2021 02:03:56 INFO 140515720472384] Epoch[1] Batch [10000]#011Speed: 139991.25 samples/sec#011binary_classification_accuracy=0.986175#011binary_classification_cross_entropy=0.059210 [12/30/2021 02:03:56 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=10000 train binary_classification_accuracy <score>=0.9861750824917508 [12/30/2021 02:03:56 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=10000 train binary_classification_cross_entropy <loss>=0.05921009378687833 [12/30/2021 02:04:03 INFO 140515720472384] Epoch[1] Batch [11000]#011Speed: 137535.00 samples/sec#011binary_classification_accuracy=0.986201#011binary_classification_cross_entropy=0.059122 [12/30/2021 02:04:03 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=11000 train binary_classification_accuracy <score>=0.9862011635305882 [12/30/2021 02:04:03 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=11000 train binary_classification_cross_entropy <loss>=0.05912183562990384 [12/30/2021 02:04:10 INFO 140515720472384] Epoch[1] Batch [12000]#011Speed: 136475.24 samples/sec#011binary_classification_accuracy=0.986238#011binary_classification_cross_entropy=0.059012 [12/30/2021 02:04:10 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=12000 train binary_classification_accuracy <score>=0.9862378135155404 [12/30/2021 02:04:10 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=12000 train binary_classification_cross_entropy <loss>=0.059012383505102295 [12/30/2021 02:04:18 INFO 140515720472384] Epoch[1] Batch [13000]#011Speed: 137785.26 samples/sec#011binary_classification_accuracy=0.986268#011binary_classification_cross_entropy=0.058932 [12/30/2021 02:04:18 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=13000 train binary_classification_accuracy <score>=0.9862682870548419 [12/30/2021 02:04:18 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=13000 train binary_classification_cross_entropy <loss>=0.05893236765990981 [12/30/2021 02:04:25 INFO 140515720472384] Epoch[1] Batch [14000]#011Speed: 138831.45 samples/sec#011binary_classification_accuracy=0.986298#011binary_classification_cross_entropy=0.058849 [12/30/2021 
02:04:25 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=14000 train binary_classification_accuracy <score>=0.9862976215984572 [12/30/2021 02:04:25 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, batch=14000 train binary_classification_cross_entropy <loss>=0.058848739905882186 [12/30/2021 02:04:30 INFO 140515720472384] Epoch[1] Train-binary_classification_accuracy=0.986302 [12/30/2021 02:04:30 INFO 140515720472384] Epoch[1] Train-binary_classification_cross_entropy=0.058862 [12/30/2021 02:04:30 INFO 140515720472384] Epoch[1] Time cost=106.744 [12/30/2021 02:04:30 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, train binary_classification_accuracy <score>=0.9863016400108431 [12/30/2021 02:04:30 INFO 140515720472384] #quality_metric: host=algo-1, epoch=1, train binary_classification_cross_entropy <loss>=0.058862226865194 #metrics {"StartTime": 1640829764.020166, "EndTime": 1640829870.767687, "Dimensions": {"Algorithm": "ipinsights", "Host": "algo-1", "Operation": "training"}, "Metrics": {"update.time": {"sum": 106747.10631370544, "count": 1, "min": 106747.10631370544, "max": 106747.10631370544}}} [12/30/2021 02:04:30 INFO 140515720472384] #progress_metric: host=algo-1, completed 40.0 % of epochs #metrics {"StartTime": 1640829764.0205572, "EndTime": 1640829870.7679281, "Dimensions": {"Algorithm": "ipinsights", "Host": "algo-1", "Operation": "training", "epoch": 1, "Meta": "training_data_iter"}, "Metrics": {"Total Records Seen": {"sum": 29510572.0, "count": 1, "min": 29510572, "max": 29510572}, "Total Batches Seen": {"sum": 29512.0, "count": 1, "min": 29512, "max": 29512}, "Max Records Seen Between Resets": {"sum": 14755286.0, "count": 1, "min": 14755286, "max": 14755286}, "Max Batches Seen Between Resets": {"sum": 14756.0, "count": 1, "min": 14756, "max": 14756}, "Reset Count": {"sum": 4.0, "count": 1, "min": 4, "max": 4}, "Number of Records Since Last Reset": {"sum": 0.0, "count": 1, "min": 0, "max": 0}, "Number of Batches Since Last Reset": {"sum": 0.0, "count": 1, "min": 0, "max": 0}}} [12/30/2021 02:04:30 INFO 140515720472384] #throughput_metric: host=algo-1, train throughput=138226.0970247211 records/second [12/30/2021 02:04:30 WARNING 140515720472384] Already bound, ignoring bind() [12/30/2021 02:04:30 WARNING 140515720472384] optimizer already initialized, ignoring... 
[12/30/2021 02:04:30 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=0 train binary_classification_accuracy <score>=0.988 [12/30/2021 02:04:30 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=0 train binary_classification_cross_entropy <loss>=0.07839033508300781 [12/30/2021 02:04:37 INFO 140515720472384] Epoch[2] Batch [1000]#011Speed: 141477.97 samples/sec#011binary_classification_accuracy=0.986459#011binary_classification_cross_entropy=0.057827 [12/30/2021 02:04:37 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=1000 train binary_classification_accuracy <score>=0.9864585414585415 [12/30/2021 02:04:37 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=1000 train binary_classification_cross_entropy <loss>=0.057826903831946865 [12/30/2021 02:04:44 INFO 140515720472384] Epoch[2] Batch [2000]#011Speed: 140878.21 samples/sec#011binary_classification_accuracy=0.986550#011binary_classification_cross_entropy=0.057289 [12/30/2021 02:04:44 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=2000 train binary_classification_accuracy <score>=0.9865502248875562 [12/30/2021 02:04:44 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=2000 train binary_classification_cross_entropy <loss>=0.05728900220643157 [12/30/2021 02:04:52 INFO 140515720472384] Epoch[2] Batch [3000]#011Speed: 141401.02 samples/sec#011binary_classification_accuracy=0.986597#011binary_classification_cross_entropy=0.057021 [12/30/2021 02:04:52 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=3000 train binary_classification_accuracy <score>=0.9865968010663112 [12/30/2021 02:04:52 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=3000 train binary_classification_cross_entropy <loss>=0.05702117189587215 [12/30/2021 02:04:59 INFO 140515720472384] Epoch[2] Batch [4000]#011Speed: 142068.18 samples/sec#011binary_classification_accuracy=0.986629#011binary_classification_cross_entropy=0.057059 [12/30/2021 02:04:59 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=4000 train binary_classification_accuracy <score>=0.9866288427893026 [12/30/2021 02:04:59 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=4000 train binary_classification_cross_entropy <loss>=0.05705870486151961 [12/30/2021 02:05:06 INFO 140515720472384] Epoch[2] Batch [5000]#011Speed: 139169.14 samples/sec#011binary_classification_accuracy=0.986668#011binary_classification_cross_entropy=0.056867 [12/30/2021 02:05:06 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=5000 train binary_classification_accuracy <score>=0.9866680663867227 [12/30/2021 02:05:06 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=5000 train binary_classification_cross_entropy <loss>=0.05686684497130725 [12/30/2021 02:05:13 INFO 140515720472384] Epoch[2] Batch [6000]#011Speed: 142028.36 samples/sec#011binary_classification_accuracy=0.986652#011binary_classification_cross_entropy=0.056919 [12/30/2021 02:05:13 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=6000 train binary_classification_accuracy <score>=0.9866522246292284 [12/30/2021 02:05:13 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=6000 train binary_classification_cross_entropy <loss>=0.05691928373736315 [12/30/2021 02:05:20 INFO 140515720472384] Epoch[2] Batch [7000]#011Speed: 141145.98 samples/sec#011binary_classification_accuracy=0.986655#011binary_classification_cross_entropy=0.056879 [12/30/2021 02:05:20 
INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=7000 train binary_classification_accuracy <score>=0.9866546207684617 [12/30/2021 02:05:20 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=7000 train binary_classification_cross_entropy <loss>=0.05687941269642318 [12/30/2021 02:05:27 INFO 140515720472384] Epoch[2] Batch [8000]#011Speed: 142364.00 samples/sec#011binary_classification_accuracy=0.986653#011binary_classification_cross_entropy=0.056985 [12/30/2021 02:05:27 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=8000 train binary_classification_accuracy <score>=0.9866527934008249 [12/30/2021 02:05:27 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=8000 train binary_classification_cross_entropy <loss>=0.05698486019459207 [12/30/2021 02:05:34 INFO 140515720472384] Epoch[2] Batch [9000]#011Speed: 143501.99 samples/sec#011binary_classification_accuracy=0.986676#011binary_classification_cross_entropy=0.056935 [12/30/2021 02:05:34 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=9000 train binary_classification_accuracy <score>=0.9866764803910677 [12/30/2021 02:05:34 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=9000 train binary_classification_cross_entropy <loss>=0.05693531916690077 [12/30/2021 02:05:41 INFO 140515720472384] Epoch[2] Batch [10000]#011Speed: 142939.11 samples/sec#011binary_classification_accuracy=0.986684#011binary_classification_cross_entropy=0.056919 [12/30/2021 02:05:41 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=10000 train binary_classification_accuracy <score>=0.9866839316068393 [12/30/2021 02:05:41 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=10000 train binary_classification_cross_entropy <loss>=0.05691869504837236 [12/30/2021 02:05:48 INFO 140515720472384] Epoch[2] Batch [11000]#011Speed: 141513.56 samples/sec#011binary_classification_accuracy=0.986686#011binary_classification_cross_entropy=0.056875 [12/30/2021 02:05:48 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=11000 train binary_classification_accuracy <score>=0.9866860285428597 [12/30/2021 02:05:48 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=11000 train binary_classification_cross_entropy <loss>=0.05687489840512102 [12/30/2021 02:05:55 INFO 140515720472384] Epoch[2] Batch [12000]#011Speed: 142461.80 samples/sec#011binary_classification_accuracy=0.986704#011binary_classification_cross_entropy=0.056792 [12/30/2021 02:05:55 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=12000 train binary_classification_accuracy <score>=0.9867041079910007 [12/30/2021 02:05:55 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=12000 train binary_classification_cross_entropy <loss>=0.05679224156288393 [12/30/2021 02:06:02 INFO 140515720472384] Epoch[2] Batch [13000]#011Speed: 141830.86 samples/sec#011binary_classification_accuracy=0.986717#011binary_classification_cross_entropy=0.056760 [12/30/2021 02:06:02 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=13000 train binary_classification_accuracy <score>=0.9867174063533575 [12/30/2021 02:06:02 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=13000 train binary_classification_cross_entropy <loss>=0.05676031053166199 [12/30/2021 02:06:09 INFO 140515720472384] Epoch[2] Batch [14000]#011Speed: 138937.32 samples/sec#011binary_classification_accuracy=0.986730#011binary_classification_cross_entropy=0.056719 [12/30/2021 
02:06:09 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=14000 train binary_classification_accuracy <score>=0.9867301621312763 [12/30/2021 02:06:09 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, batch=14000 train binary_classification_cross_entropy <loss>=0.05671881425671046 [12/30/2021 02:06:15 INFO 140515720472384] Epoch[2] Train-binary_classification_accuracy=0.986727 [12/30/2021 02:06:15 INFO 140515720472384] Epoch[2] Train-binary_classification_cross_entropy=0.056709 [12/30/2021 02:06:15 INFO 140515720472384] Epoch[2] Time cost=104.254 [12/30/2021 02:06:15 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, train binary_classification_accuracy <score>=0.9867270262943887 [12/30/2021 02:06:15 INFO 140515720472384] #quality_metric: host=algo-1, epoch=2, train binary_classification_cross_entropy <loss>=0.05670896640693828 #metrics {"StartTime": 1640829870.767744, "EndTime": 1640829975.0252588, "Dimensions": {"Algorithm": "ipinsights", "Host": "algo-1", "Operation": "training"}, "Metrics": {"update.time": {"sum": 104257.09104537964, "count": 1, "min": 104257.09104537964, "max": 104257.09104537964}}} [12/30/2021 02:06:15 INFO 140515720472384] #progress_metric: host=algo-1, completed 60.0 % of epochs #metrics {"StartTime": 1640829870.768142, "EndTime": 1640829975.02551, "Dimensions": {"Algorithm": "ipinsights", "Host": "algo-1", "Operation": "training", "epoch": 2, "Meta": "training_data_iter"}, "Metrics": {"Total Records Seen": {"sum": 44265858.0, "count": 1, "min": 44265858, "max": 44265858}, "Total Batches Seen": {"sum": 44268.0, "count": 1, "min": 44268, "max": 44268}, "Max Records Seen Between Resets": {"sum": 14755286.0, "count": 1, "min": 14755286, "max": 14755286}, "Max Batches Seen Between Resets": {"sum": 14756.0, "count": 1, "min": 14756, "max": 14756}, "Reset Count": {"sum": 6.0, "count": 1, "min": 6, "max": 6}, "Number of Records Since Last Reset": {"sum": 0.0, "count": 1, "min": 0, "max": 0}, "Number of Batches Since Last Reset": {"sum": 0.0, "count": 1, "min": 0, "max": 0}}} [12/30/2021 02:06:15 INFO 140515720472384] #throughput_metric: host=algo-1, train throughput=141527.3668676963 records/second [12/30/2021 02:06:15 WARNING 140515720472384] Already bound, ignoring bind() [12/30/2021 02:06:15 WARNING 140515720472384] optimizer already initialized, ignoring... 
[12/30/2021 02:06:15 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=0 train binary_classification_accuracy <score>=0.985 [12/30/2021 02:06:15 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=0 train binary_classification_cross_entropy <loss>=0.06285255432128906 [12/30/2021 02:06:22 INFO 140515720472384] Epoch[3] Batch [1000]#011Speed: 138178.71 samples/sec#011binary_classification_accuracy=0.986863#011binary_classification_cross_entropy=0.056253 [12/30/2021 02:06:22 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=1000 train binary_classification_accuracy <score>=0.9868631368631369 [12/30/2021 02:06:22 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=1000 train binary_classification_cross_entropy <loss>=0.056252914421089165 [12/30/2021 02:06:29 INFO 140515720472384] Epoch[3] Batch [2000]#011Speed: 139114.04 samples/sec#011binary_classification_accuracy=0.986953#011binary_classification_cross_entropy=0.055874 [12/30/2021 02:06:29 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=2000 train binary_classification_accuracy <score>=0.9869530234882559 [12/30/2021 02:06:29 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=2000 train binary_classification_cross_entropy <loss>=0.05587372519432575 [12/30/2021 02:06:36 INFO 140515720472384] Epoch[3] Batch [3000]#011Speed: 138902.99 samples/sec#011binary_classification_accuracy=0.986919#011binary_classification_cross_entropy=0.055635 [12/30/2021 02:06:36 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=3000 train binary_classification_accuracy <score>=0.9869193602132622 [12/30/2021 02:06:36 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=3000 train binary_classification_cross_entropy <loss>=0.05563474729258948 [12/30/2021 02:06:43 INFO 140515720472384] Epoch[3] Batch [4000]#011Speed: 139388.36 samples/sec#011binary_classification_accuracy=0.986915#011binary_classification_cross_entropy=0.055746 [12/30/2021 02:06:43 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=4000 train binary_classification_accuracy <score>=0.9869150212446888 [12/30/2021 02:06:43 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=4000 train binary_classification_cross_entropy <loss>=0.05574583171099849 [12/30/2021 02:06:51 INFO 140515720472384] Epoch[3] Batch [5000]#011Speed: 139210.37 samples/sec#011binary_classification_accuracy=0.986921#011binary_classification_cross_entropy=0.055666 [12/30/2021 02:06:51 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=5000 train binary_classification_accuracy <score>=0.9869214157168567 [12/30/2021 02:06:51 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=5000 train binary_classification_cross_entropy <loss>=0.05566637403186477 [12/30/2021 02:06:58 INFO 140515720472384] Epoch[3] Batch [6000]#011Speed: 139251.37 samples/sec#011binary_classification_accuracy=0.986899#011binary_classification_cross_entropy=0.055709 [12/30/2021 02:06:58 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=6000 train binary_classification_accuracy <score>=0.9868988501916347 [12/30/2021 02:06:58 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=6000 train binary_classification_cross_entropy <loss>=0.05570902763440438 [12/30/2021 02:07:05 INFO 140515720472384] Epoch[3] Batch [7000]#011Speed: 135635.32 samples/sec#011binary_classification_accuracy=0.986920#011binary_classification_cross_entropy=0.055675 [12/30/2021 02:07:05 
INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=7000 train binary_classification_accuracy <score>=0.9869197257534638 [12/30/2021 02:07:05 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=7000 train binary_classification_cross_entropy <loss>=0.055674678436059166 [12/30/2021 02:07:12 INFO 140515720472384] Epoch[3] Batch [8000]#011Speed: 139145.19 samples/sec#011binary_classification_accuracy=0.986905#011binary_classification_cross_entropy=0.055763 [12/30/2021 02:07:12 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=8000 train binary_classification_accuracy <score>=0.986904511936008 [12/30/2021 02:07:12 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=8000 train binary_classification_cross_entropy <loss>=0.05576308773237681 [12/30/2021 02:07:19 INFO 140515720472384] Epoch[3] Batch [9000]#011Speed: 139395.14 samples/sec#011binary_classification_accuracy=0.986909#011binary_classification_cross_entropy=0.055737 [12/30/2021 02:07:19 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=9000 train binary_classification_accuracy <score>=0.9869087879124542 [12/30/2021 02:07:19 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=9000 train binary_classification_cross_entropy <loss>=0.05573662098809463 [12/30/2021 02:07:27 INFO 140515720472384] Epoch[3] Batch [10000]#011Speed: 138785.02 samples/sec#011binary_classification_accuracy=0.986902#011binary_classification_cross_entropy=0.055714 [12/30/2021 02:07:27 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=10000 train binary_classification_accuracy <score>=0.9869022097790221 [12/30/2021 02:07:27 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=10000 train binary_classification_cross_entropy <loss>=0.055714041816605386 [12/30/2021 02:07:34 INFO 140515720472384] Epoch[3] Batch [11000]#011Speed: 140220.52 samples/sec#011binary_classification_accuracy=0.986906#011binary_classification_cross_entropy=0.055653 [12/30/2021 02:07:34 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=11000 train binary_classification_accuracy <score>=0.986906099445505 [12/30/2021 02:07:34 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=11000 train binary_classification_cross_entropy <loss>=0.05565311696409973 [12/30/2021 02:07:41 INFO 140515720472384] Epoch[3] Batch [12000]#011Speed: 139171.91 samples/sec#011binary_classification_accuracy=0.986919#011binary_classification_cross_entropy=0.055594 [12/30/2021 02:07:41 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=12000 train binary_classification_accuracy <score>=0.9869189234230481 [12/30/2021 02:07:41 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=12000 train binary_classification_cross_entropy <loss>=0.05559404292510714 [12/30/2021 02:07:48 INFO 140515720472384] Epoch[3] Batch [13000]#011Speed: 138648.90 samples/sec#011binary_classification_accuracy=0.986931#011binary_classification_cross_entropy=0.055534 [12/30/2021 02:07:48 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=13000 train binary_classification_accuracy <score>=0.9869309283901239 [12/30/2021 02:07:48 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=13000 train binary_classification_cross_entropy <loss>=0.0555342212483641 [12/30/2021 02:07:55 INFO 140515720472384] Epoch[3] Batch [14000]#011Speed: 139019.88 samples/sec#011binary_classification_accuracy=0.986932#011binary_classification_cross_entropy=0.055522 [12/30/2021 
02:07:55 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=14000 train binary_classification_accuracy <score>=0.9869316477394472 [12/30/2021 02:07:55 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, batch=14000 train binary_classification_cross_entropy <loss>=0.05552226170437071 [12/30/2021 02:08:01 INFO 140515720472384] Epoch[3] Train-binary_classification_accuracy=0.986934 [12/30/2021 02:08:01 INFO 140515720472384] Epoch[3] Train-binary_classification_cross_entropy=0.055513 [12/30/2021 02:08:01 INFO 140515720472384] Epoch[3] Time cost=106.276 [12/30/2021 02:08:01 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, train binary_classification_accuracy <score>=0.9869339251829764 [12/30/2021 02:08:01 INFO 140515720472384] #quality_metric: host=algo-1, epoch=3, train binary_classification_cross_entropy <loss>=0.05551349886480338 #metrics {"StartTime": 1640829975.0253398, "EndTime": 1640830081.3048966, "Dimensions": {"Algorithm": "ipinsights", "Host": "algo-1", "Operation": "training"}, "Metrics": {"update.time": {"sum": 106279.15692329407, "count": 1, "min": 106279.15692329407, "max": 106279.15692329407}}} [12/30/2021 02:08:01 INFO 140515720472384] #progress_metric: host=algo-1, completed 80.0 % of epochs #metrics {"StartTime": 1640829975.0257154, "EndTime": 1640830081.305113, "Dimensions": {"Algorithm": "ipinsights", "Host": "algo-1", "Operation": "training", "epoch": 3, "Meta": "training_data_iter"}, "Metrics": {"Total Records Seen": {"sum": 59021144.0, "count": 1, "min": 59021144, "max": 59021144}, "Total Batches Seen": {"sum": 59024.0, "count": 1, "min": 59024, "max": 59024}, "Max Records Seen Between Resets": {"sum": 14755286.0, "count": 1, "min": 14755286, "max": 14755286}, "Max Batches Seen Between Resets": {"sum": 14756.0, "count": 1, "min": 14756, "max": 14756}, "Reset Count": {"sum": 8.0, "count": 1, "min": 8, "max": 8}, "Number of Records Since Last Reset": {"sum": 0.0, "count": 1, "min": 0, "max": 0}, "Number of Batches Since Last Reset": {"sum": 0.0, "count": 1, "min": 0, "max": 0}}} [12/30/2021 02:08:01 INFO 140515720472384] #throughput_metric: host=algo-1, train throughput=138834.73698362266 records/second [12/30/2021 02:08:01 WARNING 140515720472384] Already bound, ignoring bind() [12/30/2021 02:08:01 WARNING 140515720472384] optimizer already initialized, ignoring... 
[12/30/2021 02:08:01 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=0 train binary_classification_accuracy <score>=0.989 [12/30/2021 02:08:01 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=0 train binary_classification_cross_entropy <loss>=0.03831301116943359 [12/30/2021 02:08:08 INFO 140515720472384] Epoch[4] Batch [1000]#011Speed: 138378.57 samples/sec#011binary_classification_accuracy=0.986613#011binary_classification_cross_entropy=0.055956 [12/30/2021 02:08:08 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=1000 train binary_classification_accuracy <score>=0.9866133866133866 [12/30/2021 02:08:08 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=1000 train binary_classification_cross_entropy <loss>=0.055955806751232164 [12/30/2021 02:08:15 INFO 140515720472384] Epoch[4] Batch [2000]#011Speed: 142199.71 samples/sec#011binary_classification_accuracy=0.986809#011binary_classification_cross_entropy=0.055404 [12/30/2021 02:08:15 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=2000 train binary_classification_accuracy <score>=0.986808595702149 [12/30/2021 02:08:15 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=2000 train binary_classification_cross_entropy <loss>=0.05540410045764853 [12/30/2021 02:08:22 INFO 140515720472384] Epoch[4] Batch [3000]#011Speed: 141240.72 samples/sec#011binary_classification_accuracy=0.986862#011binary_classification_cross_entropy=0.055138 [12/30/2021 02:08:22 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=3000 train binary_classification_accuracy <score>=0.9868617127624125 [12/30/2021 02:08:22 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=3000 train binary_classification_cross_entropy <loss>=0.05513837132458685 [12/30/2021 02:08:29 INFO 140515720472384] Epoch[4] Batch [4000]#011Speed: 140339.25 samples/sec#011binary_classification_accuracy=0.986910#011binary_classification_cross_entropy=0.055135 [12/30/2021 02:08:29 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=4000 train binary_classification_accuracy <score>=0.9869095226193452 [12/30/2021 02:08:29 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=4000 train binary_classification_cross_entropy <loss>=0.05513538383072956 [12/30/2021 02:08:36 INFO 140515720472384] Epoch[4] Batch [5000]#011Speed: 141590.68 samples/sec#011binary_classification_accuracy=0.986940#011binary_classification_cross_entropy=0.055006 [12/30/2021 02:08:36 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=5000 train binary_classification_accuracy <score>=0.9869398120375925 [12/30/2021 02:08:36 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=5000 train binary_classification_cross_entropy <loss>=0.05500639757931745 [12/30/2021 02:08:43 INFO 140515720472384] Epoch[4] Batch [6000]#011Speed: 141670.67 samples/sec#011binary_classification_accuracy=0.986956#011binary_classification_cross_entropy=0.055012 [12/30/2021 02:08:43 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=6000 train binary_classification_accuracy <score>=0.9869560073321113 [12/30/2021 02:08:43 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=6000 train binary_classification_cross_entropy <loss>=0.05501184173775323 [12/30/2021 02:08:51 INFO 140515720472384] Epoch[4] Batch [7000]#011Speed: 140456.50 samples/sec#011binary_classification_accuracy=0.986955#011binary_classification_cross_entropy=0.054981 [12/30/2021 02:08:51 
INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=7000 train binary_classification_accuracy <score>=0.986955434937866 [12/30/2021 02:08:51 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=7000 train binary_classification_cross_entropy <loss>=0.05498078312102156 [12/30/2021 02:08:58 INFO 140515720472384] Epoch[4] Batch [8000]#011Speed: 141328.62 samples/sec#011binary_classification_accuracy=0.986965#011binary_classification_cross_entropy=0.055031 [12/30/2021 02:08:58 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=8000 train binary_classification_accuracy <score>=0.9869648793900763 [12/30/2021 02:08:58 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=8000 train binary_classification_cross_entropy <loss>=0.05503128272255515 [12/30/2021 02:09:05 INFO 140515720472384] Epoch[4] Batch [9000]#011Speed: 139350.05 samples/sec#011binary_classification_accuracy=0.986979#011binary_classification_cross_entropy=0.055005 [12/30/2021 02:09:05 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=9000 train binary_classification_accuracy <score>=0.9869786690367737 [12/30/2021 02:09:05 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=9000 train binary_classification_cross_entropy <loss>=0.05500530243990037 [12/30/2021 02:09:12 INFO 140515720472384] Epoch[4] Batch [10000]#011Speed: 141372.27 samples/sec#011binary_classification_accuracy=0.986984#011binary_classification_cross_entropy=0.054993 [12/30/2021 02:09:12 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=10000 train binary_classification_accuracy <score>=0.986984201579842 [12/30/2021 02:09:12 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=10000 train binary_classification_cross_entropy <loss>=0.05499327250492953 [12/30/2021 02:09:19 INFO 140515720472384] Epoch[4] Batch [11000]#011Speed: 141220.58 samples/sec#011binary_classification_accuracy=0.986964#011binary_classification_cross_entropy=0.055004 [12/30/2021 02:09:19 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=11000 train binary_classification_accuracy <score>=0.9869635487682938 [12/30/2021 02:09:19 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=11000 train binary_classification_cross_entropy <loss>=0.05500396949132457 [12/30/2021 02:09:26 INFO 140515720472384] Epoch[4] Batch [12000]#011Speed: 141963.93 samples/sec#011binary_classification_accuracy=0.986985#011binary_classification_cross_entropy=0.054923 [12/30/2021 02:09:26 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=12000 train binary_classification_accuracy <score>=0.9869846679443379 [12/30/2021 02:09:26 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=12000 train binary_classification_cross_entropy <loss>=0.05492287140196139 [12/30/2021 02:09:33 INFO 140515720472384] Epoch[4] Batch [13000]#011Speed: 142224.19 samples/sec#011binary_classification_accuracy=0.986989#011binary_classification_cross_entropy=0.054910 [12/30/2021 02:09:33 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=13000 train binary_classification_accuracy <score>=0.9869893854318899 [12/30/2021 02:09:33 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=13000 train binary_classification_cross_entropy <loss>=0.054910142276151555 [12/30/2021 02:09:40 INFO 140515720472384] Epoch[4] Batch [14000]#011Speed: 141572.00 samples/sec#011binary_classification_accuracy=0.986991#011binary_classification_cross_entropy=0.054892 [12/30/2021 
02:09:40 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=14000 train binary_classification_accuracy <score>=0.9869911434897507 [12/30/2021 02:09:40 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, batch=14000 train binary_classification_cross_entropy <loss>=0.05489249316692931 [12/30/2021 02:09:45 INFO 140515720472384] Epoch[4] Train-binary_classification_accuracy=0.986995 [12/30/2021 02:09:45 INFO 140515720472384] Epoch[4] Train-binary_classification_cross_entropy=0.054866 [12/30/2021 02:09:45 INFO 140515720472384] Epoch[4] Time cost=104.585 [12/30/2021 02:09:45 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, train binary_classification_accuracy <score>=0.9869951883979399 [12/30/2021 02:09:45 INFO 140515720472384] #quality_metric: host=algo-1, epoch=4, train binary_classification_cross_entropy <loss>=0.054866424535535224 #metrics {"StartTime": 1640830081.3049538, "EndTime": 1640830185.8938043, "Dimensions": {"Algorithm": "ipinsights", "Host": "algo-1", "Operation": "training"}, "Metrics": {"update.time": {"sum": 104588.45448493958, "count": 1, "min": 104588.45448493958, "max": 104588.45448493958}}} [12/30/2021 02:09:45 INFO 140515720472384] #progress_metric: host=algo-1, completed 100.0 % of epochs #metrics {"StartTime": 1640830081.3053226, "EndTime": 1640830185.8940654, "Dimensions": {"Algorithm": "ipinsights", "Host": "algo-1", "Operation": "training", "epoch": 4, "Meta": "training_data_iter"}, "Metrics": {"Total Records Seen": {"sum": 73776430.0, "count": 1, "min": 73776430, "max": 73776430}, "Total Batches Seen": {"sum": 73780.0, "count": 1, "min": 73780, "max": 73780}, "Max Records Seen Between Resets": {"sum": 14755286.0, "count": 1, "min": 14755286, "max": 14755286}, "Max Batches Seen Between Resets": {"sum": 14756.0, "count": 1, "min": 14756, "max": 14756}, "Reset Count": {"sum": 10.0, "count": 1, "min": 10, "max": 10}, "Number of Records Since Last Reset": {"sum": 0.0, "count": 1, "min": 0, "max": 0}, "Number of Batches Since Last Reset": {"sum": 0.0, "count": 1, "min": 0, "max": 0}}} [12/30/2021 02:09:45 INFO 140515720472384] #throughput_metric: host=algo-1, train throughput=141078.91831818267 records/second [12/30/2021 02:09:45 WARNING 140515720472384] wait_for_all_workers will not sync workers since the kv store is not running distributed #metrics {"StartTime": 1640830185.8938808, "EndTime": 1640830185.8945844, "Dimensions": {"Algorithm": "ipinsights", "Host": "algo-1", "Operation": "training"}, "Metrics": {"finalize.time": {"sum": 0.1914501190185547, "count": 1, "min": 0.1914501190185547, "max": 0.1914501190185547}}} [12/30/2021 02:09:45 INFO 140515720472384] Saved checkpoint to "/tmp/tmpbntot631/state-0001.params" [12/30/2021 02:09:45 INFO 140515720472384] Test data is not provided. #metrics {"StartTime": 1640830185.8946366, "EndTime": 1640830185.931485, "Dimensions": {"Algorithm": "ipinsights", "Host": "algo-1", "Operation": "training"}, "Metrics": {"setuptime": {"sum": 66.99132919311523, "count": 1, "min": 66.99132919311523, "max": 66.99132919311523}, "totaltime": {"sum": 527582.435131073, "count": 1, "min": 527582.435131073, "max": 527582.435131073}}} 2021-12-30 02:09:59 Uploading - Uploading generated training model 2021-12-30 02:09:59 Completed - Training job completed Training seconds: 619 Billable seconds: 619
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
If you see the message > Completed - Training job completed at the bottom of the output logs, training finished successfully and the SageMaker IP Insights model artifact was stored in the specified output path. You can also view information about, and the status of, a training job in the AWS SageMaker console: click on the "Jobs" tab and select the training job matching the training job name printed below:
print(f"Training job name: {ip_insights.latest_training_job.job_name}")
Training job name: ipinsights-2021-12-30-01-57-12-007
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
Inference-----Now that we have trained a SageMaker IP Insights model, we can deploy it to an endpoint to start performing inference on data. In this case, that means providing it a `<user, IP address>` pair and predicting their compatibility score. We can create an inference endpoint using the SageMaker Python SDK `deploy()` function from the job we defined above. We specify the instance type on which inference will be performed, as well as the initial number of instances to spin up. We recommend the `ml.m5` instance family, as it provides the most memory at the lowest cost. Verify how large your model is in S3 and pick an instance type with the appropriate amount of memory.
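One quick way to check the artifact size mentioned above is to look at the trained `model.tar.gz` in S3 before choosing an instance type. A minimal sketch, assuming the estimator's standard `model_data` attribute points at the artifact:

# Sketch: check the size of the trained model artifact in S3 (model_data is a standard Estimator attribute).
import boto3

model_uri = ip_insights.model_data                       # e.g. s3://<bucket>/<prefix>/output/model.tar.gz
artifact_bucket, artifact_key = model_uri.replace("s3://", "").split("/", 1)
size_bytes = boto3.client("s3").head_object(Bucket=artifact_bucket, Key=artifact_key)["ContentLength"]
print(f"Model artifact size: {size_bytes / 1024 / 1024:.1f} MiB")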
predictor = ip_insights.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
-------!
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
Congratulations, you now have a SageMaker IP Insights inference endpoint! You could start integrating this endpoint with your production services to start querying incoming requests for abnormal behavior. You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console and selecting the endpoint matching the endpoint name below:
print(f"Endpoint name: {predictor.endpoint}")
_____no_output_____
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
Data Serialization/DeserializationWe can pass data in a variety of formats to our inference endpoint. In this example, we will pass CSV-formatted data. Other available formats are JSON-formatted and JSON Lines-formatted. We make use of the SageMaker Python SDK utilities `csv_serializer` and `json_deserializer` when configuring the inference endpoint.
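For intuition, the CSV serializer simply joins each (user, IP address) pair with a comma and separates records with newlines. A small illustration with made-up values:

# Illustration only (hypothetical values): what a CSV-serialized request body looks like.
pairs = [("user_1", "192.168.1.10"), ("user_2", "10.0.0.42")]
payload = "\n".join(",".join(pair) for pair in pairs)
print(payload)
# user_1,192.168.1.10
# user_2,10.0.0.42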
from sagemaker.predictor import csv_serializer, json_deserializer predictor.serializer = csv_serializer predictor.deserializer = json_deserializer
_____no_output_____
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
Now that the predictor is configured, running inference is as easy as passing in a matrix of inference data. We can take a few samples from the simulated dataset above, so we can see what the output looks like.
inference_data = [(data[0], data[1]) for data in train_df[:5].values] predictor.predict( inference_data, initial_args={"ContentType": "text/csv", "Accept": "application/json"} )
The csv_serializer has been renamed in sagemaker>=2. See: https://sagemaker.readthedocs.io/en/stable/v2.html for details. The json_deserializer has been renamed in sagemaker>=2. See: https://sagemaker.readthedocs.io/en/stable/v2.html for details.
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
By default, the predictor only outputs the `dot_product` between the learned IP address embedding and the online resource embedding (in this case, the user ID). The dot product summarizes the compatibility between the IP address and the online resource: the larger the value, the more likely the algorithm thinks the IP address is to be used by that user. This compatibility score is sufficient for most applications, as we can define a threshold for what we consider an anomalous score. However, more advanced users may want to inspect the learned embeddings and use them in further applications. We can configure the predictor to return the learned embeddings by specifying `verbose=True` in the Accept header. You should see that each 'prediction' object contains three keys: `ip_embedding`, `entity_embedding`, and `dot_product`.
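As a sanity check on the verbose output, the returned `dot_product` should match (up to floating-point error) the dot product of the two returned embeddings. A sketch, assuming you assign the verbose response from the next cell to a variable, e.g. `result = predictor.predict(...)`:

# Sketch: recompute the compatibility score from the returned embeddings (assumes `result` holds the verbose response).
import numpy as np

for prediction in result["predictions"]:
    recomputed = np.dot(prediction["ip_embedding"], prediction["entity_embedding"])
    print(recomputed, prediction["dot_product"])  # the two numbers should agree closely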
predictor.predict( inference_data, initial_args={"ContentType": "text/csv", "Accept": "application/json; verbose=True"}, )
The csv_serializer has been renamed in sagemaker>=2. See: https://sagemaker.readthedocs.io/en/stable/v2.html for details. The json_deserializer has been renamed in sagemaker>=2. See: https://sagemaker.readthedocs.io/en/stable/v2.html for details.
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
Compute Anomaly Scores----The `dot_product` output of the model provides a good measure of how compatible an IP address and online resource are. However, the range of the dot_product is unbounded, so to treat an event as anomalous we need to define a threshold: when we score an event, if its score (the negated dot_product we compute below) is above the threshold, we flag the behavior as anomalous. Picking a threshold can be more of an art, and a good threshold depends on the specifics of your problem and dataset. In the following section, we show how to pick a simple threshold by comparing the score distributions between known normal and malicious traffic:1. Construct a test set of 'Normal' traffic;2. Inject 'Malicious' traffic into the dataset;3. Plot the distribution of scores for the model on the 'Normal' traffic and the 'Malicious' traffic;4. Select a threshold value which separates the normal distribution from the malicious distribution. This value depends on your false-positive tolerance. 1. Construct 'Normal' Traffic DatasetWe previously [created a test set](#3.-Create-training-and-test-dataset) from our simulated Apache access logs dataset. We use this test dataset as the 'Normal' traffic in the test case.
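Before building the test case, here is the flagging rule in a nutshell: we score an event with the negated dot_product and flag it when the score exceeds the threshold. A minimal sketch:

# Sketch of the flagging rule used in the rest of this section.
def is_anomalous(dot_product, threshold):
    """An event is flagged when its IP Insights score (the negated dot product) exceeds the threshold."""
    score = -dot_product
    return score > threshold

print(is_anomalous(dot_product=-5.2, threshold=0.0))   # True: low compatibility looks anomalous
print(is_anomalous(dot_product=12.7, threshold=0.0))   # False: high compatibility looks normal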
test_df.head()
_____no_output_____
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
2. Inject Malicious TrafficIf we had a dataset with enough real malicious activity, we would use that to determine a good threshold. Those are hard to come by. So instead, we simulate malicious web traffic that mimics a realistic attack scenario. We take a set of user accounts from the test set and randomly generate IP addresses. The users should not have used these IP addresses during training. This simulates an attacker logging in to a user account without knowledge of their IP history.
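The random IP addresses below come from the `draw_ip` helper in the accompanying `generate_data` module. Purely as an illustration of the idea (not that helper's actual implementation), drawing a uniformly random IPv4 address could look like this:

# Illustration only: a uniformly random IPv4 address; the notebook's real draw_ip() lives in generate_data.py.
import random
import ipaddress

def random_ipv4():
    return str(ipaddress.IPv4Address(random.getrandbits(32)))

print(random_ipv4())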
import numpy as np from generate_data import draw_ip def score_ip_insights(predictor, df): def get_score(result): """Return the negative of the dot product of the predictions from the model.""" return [-prediction["dot_product"] for prediction in result["predictions"]] df = df[["user", "ip_address"]] result = predictor.predict(df.values) # df.values is a 2D array of [user, ip_address] rows return get_score(result) def create_test_case(train_df, test_df, num_samples, attack_freq): """Creates a test case from provided train and test data frames. This generates test cases for accounts that are both in training and testing data sets. :param train_df: (pandas.DataFrame with columns ['user', 'ip_address']) training DataFrame :param test_df: (pandas.DataFrame with columns ['user', 'ip_address']) testing DataFrame :param num_samples: (int) number of test samples to use :param attack_freq: (float) the ratio of negative_samples:positive_samples to generate for test case :return: DataFrame with both good and bad traffic, with labels """ # Get all possible accounts. The IP Insights model can only make predictions on users it has seen in training # Therefore, filter the test dataset for unseen accounts, as their results will not mean anything. valid_accounts = set(train_df["user"]) valid_test_df = test_df[test_df["user"].isin(valid_accounts)] good_traffic = valid_test_df.sample(num_samples, replace=False) good_traffic = good_traffic[["user", "ip_address"]] good_traffic["label"] = 0 # Generate malicious traffic num_bad_traffic = int(num_samples * attack_freq) bad_traffic_accounts = np.random.choice( list(valid_accounts), size=num_bad_traffic, replace=True ) bad_traffic_ips = [draw_ip() for i in range(num_bad_traffic)] bad_traffic = pd.DataFrame({"user": bad_traffic_accounts, "ip_address": bad_traffic_ips}) bad_traffic["label"] = 1 # All traffic labels are: 0 for good traffic; 1 for bad traffic. all_traffic = good_traffic.append(bad_traffic) return all_traffic NUM_SAMPLES = 100000 test_case = create_test_case(train_df, test_df, num_samples=NUM_SAMPLES, attack_freq=1) test_case.head() test_case['label'].value_counts() test_case_scores = score_ip_insights(predictor, test_case)
The csv_serializer has been renamed in sagemaker>=2. See: https://sagemaker.readthedocs.io/en/stable/v2.html for details. The json_deserializer has been renamed in sagemaker>=2. See: https://sagemaker.readthedocs.io/en/stable/v2.html for details.
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
3. Plot DistributionNow, we plot the distribution of scores. Looking at this distribution shows us where we can set a good threshold, based on our risk tolerance.
%matplotlib inline import matplotlib.pyplot as plt n, x = np.histogram(test_case_scores[:NUM_SAMPLES], bins=100, density=True) plt.plot(x[1:], n) n, x = np.histogram(test_case_scores[NUM_SAMPLES:], bins=100, density=True) plt.plot(x[1:], n) plt.legend(["Normal", "Random IP"]) plt.xlabel("IP Insights Score") plt.ylabel("Frequency") plt.figure()
_____no_output_____
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
4. Selecting a Good ThresholdAs we see in the figure above, there is a clear separation between normal traffic and random traffic. We could select a threshold depending on the application.- If we were working with low-impact decisions, such as whether to ask for another factor of authentication during login, we could use `threshold = 0.0`. This would catch more true positives, at the cost of more false positives. - If our decision system were more sensitive to false positives, we could choose a larger threshold, such as `threshold = 10.0`. That way, if we were sending the flagged cases for manual investigation, we would have higher confidence that the activity was suspicious.
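If you prefer to derive the threshold from an explicit false-positive budget rather than eyeballing the plot, one simple option is to take a high percentile of the scores on the known-normal traffic. A sketch, using the `test_case_scores` computed above (the first `NUM_SAMPLES` entries are the normal traffic):

# Sketch: choose the threshold so that roughly 1% of normal events would be flagged (a 1% false-positive budget).
import numpy as np

normal_scores = np.array(test_case_scores[:NUM_SAMPLES])
threshold_1pct_fpr = np.percentile(normal_scores, 99)
print(f"Threshold for ~1% false-positive rate: {threshold_1pct_fpr:.2f}")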
threshold = 0.0 flagged_cases = test_case[np.array(test_case_scores) > threshold] num_flagged_cases = len(flagged_cases) num_true_positives = len(flagged_cases[flagged_cases["label"] == 1]) num_false_positives = len(flagged_cases[flagged_cases["label"] == 0]) num_all_positives = len(test_case.loc[test_case["label"] == 1]) print(f"When threshold is set to: {threshold}") print(f"Total of {num_flagged_cases} flagged cases") print(f"Total of {num_true_positives} flagged cases are true positives") print(f"Recall: {num_true_positives / float(num_all_positives)}") print(f"Precision: {num_true_positives / float(num_flagged_cases)}")
When threshold is set to: 0.0 Total of 102539 flagged cases Total of 98149 flagged cases are true positives Recall: 0.98149 Precision: 0.9571870215235179
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
Epilogue----In this notebook, we have shown how to configure the basic training, deployment, and usage of the Amazon SageMaker IP Insights algorithm. All SageMaker algorithms come with support for two additional services that make optimizing and using the algorithm that much easier: Automatic Model Tuning and the Batch Transform service. Amazon SageMaker Automatic Model TuningThe results above were based on the default hyperparameters of the SageMaker IP Insights algorithm. If we want to improve the model's performance even more, we can use [Amazon SageMaker Automatic Model Tuning](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html) to automate the process of finding the hyperparameters. Validation DatasetPreviously, we separated our dataset into a training and test set to validate the performance of a single IP Insights model. However, when we do model tuning, we train many IP Insights models in parallel. If we were to use the same test dataset to select the best model, we bias our model selection such that we don't know whether we selected the best model in general or just the best model for that particular dataset. Therefore, we need to separate our test set into a validation dataset and a test dataset. The validation dataset is used for model selection. Once we pick the model with the best performance, we evaluate the winner on the test set just as before. Validation MetricsFor SageMaker Automatic Model Tuning to work, we need an objective metric that measures the performance of the model we want to optimize. Because SageMaker IP Insights is an unsupervised algorithm, we do not have a clearly defined metric for performance (such as the percentage of fraudulent events discovered). We allow the user to provide a validation set of sample data (same format as the training data above) through the `validation` channel. We then fix the negative sampling strategy to use `random_negative_sampling_rate=1` and `shuffled_negative_sampling_rate=0` and generate a validation dataset by assigning corresponding labels to the real and simulated data. We then calculate the model's `discriminator_auc` metric: we take the model's predicted labels and the 'true' simulated labels and compute the Area Under the ROC Curve (AUC) of the model's predictions. We set up the `HyperParameterTuner` to maximize the `discriminator_auc` on the validation dataset. We also need to set the search space for the hyperparameters. We give recommended ranges for the hyperparameters in the [Amazon SageMaker IP Insights (Hyperparameters)](https://docs.aws.amazon.com/sagemaker/latest/dg/ip-insights-hyperparameters.html) documentation.
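For intuition about the `discriminator_auc` objective, you can compute an analogous AUC offline on the labeled test case we built earlier. A minimal sketch, assuming scikit-learn is available in the notebook environment:

# Sketch: AUC of the IP Insights scores against the simulated labels from the earlier test case.
from sklearn.metrics import roc_auc_score

offline_auc = roc_auc_score(test_case["label"].values, test_case_scores)
print(f"Offline AUC on the simulated test case: {offline_auc:.3f}")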
test_df["timestamp"].describe()
/usr/local/lib/python3.6/site-packages/ipykernel_launcher.py:1: FutureWarning: Treating datetime data as categorical rather than numeric in `.describe` is deprecated and will be removed in a future version of pandas. Specify `datetime_is_numeric=True` to silence this warning and adopt the future behavior now. """Entry point for launching an IPython kernel.
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
The test set we constructed above spans 3 days. We reserve the first day as the validation set and the subsequent two days for the test set.
time_partition = ( datetime(2018, 11, 13, tzinfo=pytz.FixedOffset(0)) if num_time_zones > 1 else datetime(2018, 11, 13) ) validation_df = test_df[test_df["timestamp"] < time_partition] test_df = test_df[test_df["timestamp"] >= time_partition] valid_data = validation_df.to_csv(index=False, header=False, columns=["user", "ip_address"])
_____no_output_____
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
We then upload the validation data to S3 and specify it as the validation channel.
# Upload data to S3 key validation_data_file = "valid.csv" key = os.path.join(prefix, "validation", validation_data_file) boto3.resource("s3").Bucket(bucket).Object(key).put(Body=valid_data) s3_valid_data = f"s3://{bucket}/{key}" print(f"Validation data has been uploaded to: {s3_valid_data}") # Configure SageMaker IP Insights Input Channels input_data = {"train": s3_train_data, "validation": s3_valid_data} from sagemaker.tuner import HyperparameterTuner, IntegerParameter # Configure HyperparameterTuner ip_insights_tuner = HyperparameterTuner( estimator=ip_insights, # previously-configured Estimator object objective_metric_name="validation:discriminator_auc", hyperparameter_ranges={"vector_dim": IntegerParameter(64, 1024)}, max_jobs=4, max_parallel_jobs=2, ) # Start hyperparameter tuning job ip_insights_tuner.fit(input_data, include_cls_metadata=False) # Wait for all the jobs to finish ip_insights_tuner.wait() # Visualize training job results ip_insights_tuner.analytics().dataframe() # Deploy best model tuned_predictor = ip_insights_tuner.deploy( initial_instance_count=1, instance_type="ml.m4.xlarge", serializer=csv_serializer, deserializer=json_deserializer, ) # Make a prediction against the SageMaker endpoint tuned_predictor.predict( inference_data, initial_args={"ContentType": "text/csv", "Accept": "application/json"} )
_____no_output_____
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
We should now have the best-performing model from the hyperparameter tuning job! We can determine thresholds and make predictions just like we did with the inference endpoint [above](#Inference). Batch TransformLet's say we want to score all of the login events at the end of the day and aggregate flagged cases for investigators to look at in the morning. If we store the daily login events in S3, we can use IP Insights with [Amazon SageMaker Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-batch.html) to run inference and store the IP Insights scores back in S3 for future analysis. Below, we take the training job from before and evaluate it on the validation data we put in S3.
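After the transform job in the next cell completes, SageMaker writes one output object per input file (the input file name with a `.out` suffix) under `transformer.output_path`. A sketch of pulling those scores back for analysis (assumes the validation file was uploaded as `valid.csv`):

# Sketch: download the batch transform output from S3 once the job below has finished.
import boto3

out_bucket, out_prefix = transformer.output_path.replace("s3://", "").split("/", 1)
out_key = f"{out_prefix}/valid.csv.out"                     # input file name + ".out"
body = boto3.resource("s3").Object(out_bucket, out_key).get()["Body"].read().decode("utf-8")
print(body[:500])                                           # first few scored records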
transformer = ip_insights.transformer(instance_count=1, instance_type="ml.m4.xlarge") transformer.transform(s3_valid_data, content_type="text/csv", split_type="Line") # Wait for Transform Job to finish transformer.wait() print(f"Batch Transform output is at: {transformer.output_path}")
_____no_output_____
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
Stop and Delete the EndpointIf you are done with this model, delete the endpoint before you close the notebook; otherwise you will continue to pay for the endpoint while it is running. To do so, execute the cell below. Alternatively, you can navigate to the "Endpoints" tab in the SageMaker console, select the endpoint with the name stored in the variable endpoint_name, and select "Delete" from the "Actions" dropdown menu.
ip_insights_tuner.delete_endpoint() sagemaker.Session().delete_endpoint(predictor.endpoint)
The function delete_endpoint is a no-op in sagemaker>=2. See: https://sagemaker.readthedocs.io/en/stable/v2.html for details. The endpoint attribute has been renamed in sagemaker>=2. See: https://sagemaker.readthedocs.io/en/stable/v2.html for details.
MIT
ipinsights-tutorial.ipynb
avoca-dorable/aws_ipinsights
GLUE sets: model will be trained on eval set, so you shouldn't also test on the eval set. The problem is that the labels are withheld for the test set. Start with SNLI. MultiNLI is a later option too. As is rotten_tomatoes. * Victim model performance on dataset train, valid, test set. (done, written code to measure it)* Create new paraphrased valid + test datasets (done a preliminary version on the valid set) * Measure victim model performance on paraphrased datasets (done. on vanilla valid set is about 87% accuracy. generating 16 paraphrases (i.e. not many) and evaluating performance on all of them, we get ~75% accuracy)* Get document embeddings of original and paraphrased and compare (done) * https://github.com/UKPLab/sentence-transformers* Write a simple way to measure paraphrase quality (done) * Construct reward function
%load_ext autoreload %autoreload 2 import os import torch from torch.utils.data import DataLoader from datasets import load_dataset, load_metric import datasets, transformers from transformers import pipeline, AutoModelForSeq2SeqLM, AutoModelForSequenceClassification, AutoTokenizer from pprint import pprint import numpy as np, pandas as pd import scipy from utils import * # local script import pyarrow from sentence_transformers import SentenceTransformer, util from IPython.core.debugger import set_trace from GPUtil import showUtilization import seaborn as sns from itertools import repeat from collections import defaultdict from IPython.display import Markdown path_cache = './cache/' path_results = "./results/" seed = 420 torch.manual_seed(seed) np.random.seed(seed) torch.cuda.manual_seed(seed) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') devicenum = torch.cuda.current_device() if device.type == 'cuda' else -1 n_wkrs = 4 * torch.cuda.device_count() batch_size = 64 pd.set_option("display.max_colwidth", 400) # Paraphrase model (para) para_name = "tuner007/pegasus_paraphrase" para_tokenizer = AutoTokenizer.from_pretrained(para_name) para_model = AutoModelForSeq2SeqLM.from_pretrained(para_name).to(device) # Victim Model (VM) vm_name = "textattack/distilbert-base-cased-snli" vm_tokenizer = AutoTokenizer.from_pretrained(vm_name) vm_model = AutoModelForSequenceClassification.from_pretrained(vm_name).to(device) vm_idx2lbl = vm_model.config.id2label vm_lbl2idx = vm_model.config.label2id vm_num_labels = vm_model.num_labels # Semantic Similarity model embedding_model = SentenceTransformer('paraphrase-distilroberta-base-v1') dataset = load_dataset("snli") train,valid,test = dataset['train'],dataset['validation'],dataset['test'] label_cname = 'label' remove_minus1_labels = lambda x: x[label_cname] != -1 train = train.filter(remove_minus1_labels) valid = valid.filter(remove_minus1_labels) test = test.filter(remove_minus1_labels) # make sure that all datasets have the same number of labels as what the victim model predicts assert train.features[label_cname].num_classes == vm_num_labels assert valid.features[label_cname].num_classes == vm_num_labels assert test.features[ label_cname].num_classes == vm_num_labels train_dl = DataLoader(train, batch_size=batch_size, shuffle=True, num_workers=n_wkrs) valid_dl = DataLoader(valid, batch_size=batch_size, shuffle=True, num_workers=n_wkrs) test_dl = DataLoader( test, batch_size=batch_size, shuffle=True, num_workers=n_wkrs) def get_paraphrases(input_text,num_return_sequences,num_beams, num_beam_groups=1,diversity_penalty=0): batch = para_tokenizer(input_text,truncation=True,padding='longest', return_tensors="pt").to(device) translated = para_model.generate(**batch,num_beams=num_beams, num_return_sequences=num_return_sequences, temperature=1.5, num_beam_groups=num_beam_groups, diversity_penalty=diversity_penalty) tgt_text = para_tokenizer.batch_decode(translated, skip_special_tokens=True) return tgt_text def gen_dataset_paraphrases(x, cname_input, cname_output, n_seed_seqs=32): """ x: one row of a dataset. cname_input: column to generate paraphrases for cname_output: column name to give output of paraphrases n_seed_seqs: rough indicator of how many paraphrases to return. For now, keep at 4,8,16,32,64 etc""" # TODO: figure out how to batch this. 
if n_seed_seqs % 4 != 0: raise ValueError("keep n_seed_seqs divisible by 4 for now") n = n_seed_seqs/2 #low diversity (ld) paraphrases ld_l = get_paraphrases(x[cname_input],num_return_sequences=int(n), num_beams=int(n)) #high diversity (hd) paraphrases. We can use num_beam_groups and diversity_penalty as hyperparameters. hd_l = get_paraphrases(x[cname_input],num_return_sequences=int(n), num_beams=int(n), num_beam_groups=int(n),diversity_penalty=50002.5) l = ld_l + hd_l x[cname_output] = l #TODO: change to list(set(l)) return x # Generate paraphrase dataset n_seed_seqs = 48 date = '20210629' fname = path_cache + 'valid_small_'+ date + '_' + str(n_seed_seqs) if os.path.exists(fname): # simple caching valid_small = datasets.load_from_disk(fname) else: valid_small = valid.shard(20, 0, contiguous=True) valid_small = valid_small.map(lambda x: gen_dataset_paraphrases(x, n_seed_seqs=n_seed_seqs, cname_input='hypothesis', cname_output='hypothesis_paraphrases'), batched=False) valid_small.save_to_disk(fname) # Create a new version of paraphrase dataset by repeating all other fields to be same # length as number of paraphrases. def create_paraphrase_dataset(batch, l_cname): """Repeat the other fields to be the same length as the number of paraphrases. l_cname: column name that contains the list of paraphrases""" return_d = defaultdict(list) for o in zip(*batch.values()): d = dict(zip(batch.keys(), o)) n_paraphrases = len(d[l_cname]) for k,v in d.items(): return_d[k] += v if k == l_cname else [v for o in range(n_paraphrases)] return return_d fname = path_cache + 'valid_small_paraphrases_' + date + '_'+ str(n_seed_seqs) if os.path.exists(fname): valid_small_paraphrases = datasets.load_from_disk(fname) else: # Need to call this with batched=True to work. valid_small_paraphrases = valid_small.map(lambda x: create_paraphrase_dataset(x, l_cname='hypothesis_paraphrases'), batched=True) valid_small_paraphrases.save_to_disk(fname) # Generate results dataframe def get_vm_scores(): """very hacky procedure to generate victim model scores """ # Get preds and accuracy on the paraphrase dataset print("Getting victim model scores.") some_dl = DataLoader(valid_small_paraphrases, batch_size=batch_size, shuffle=False, num_workers=n_wkrs, pin_memory=True) dl = some_dl metric = load_metric('accuracy') para_probs_l,orig_probs_l = [], [] assert vm_model.training == False # checks that model is in eval mode #monitor = Monitor(2) # track GPU usage and memory with torch.no_grad(): for i, data in enumerate(dl): if i % 50 == 0 : print(i, "out of", len(dl)) labels,premise = data['label'].to(device),data["premise"] paraphrases,orig = data["hypothesis_paraphrases"],data["hypothesis"] # predictions for original inputs = vm_tokenizer(premise,orig,padding=True,truncation=True, return_tensors="pt") inputs.to(device) outputs = vm_model(**inputs, labels=labels) probs = outputs.logits.softmax(1) preds = probs.argmax(1) orig_probs_l.append(probs.cpu()) # predictions for paraphrases inputs = vm_tokenizer(premise,paraphrases, padding=True,truncation=True, return_tensors="pt") inputs.to(device) outputs = vm_model(**inputs, labels=labels) probs = outputs.logits.softmax(1) preds = probs.argmax(1) para_probs_l.append(probs.cpu()) metric.add_batch(predictions=preds, references=labels) orig_probs_t, para_probs_t = torch.cat(orig_probs_l),torch.cat(para_probs_l) #monitor.stop() return para_probs_t, orig_probs_t def generate_sim_scores(): """Function to just loop and generate sim scores for each input""" print("Getting similarity scores") 
sim_score_l = [] for i, data in enumerate(valid_small): if i % 50 == 0 : print(i, "out of", len(valid_small)) orig, para = data['hypothesis'], data['hypothesis_paraphrases'] orig_emb,para_emb = embedding_model.encode(orig),embedding_model.encode(para) cos_sim = util.cos_sim(orig_emb,para_emb)[0] sim_score_l.append(cos_sim) sim_score_t = torch.cat(sim_score_l) return sim_score_t fname = path_cache + 'results_df_'+ date + "_" + str(n_seed_seqs) + ".csv" if os.path.exists(fname): results_df = pd.read_csv(fname) else: sim_score_t = generate_sim_scores() para_probs_t, orig_probs_t = get_vm_scores() vm_para_scores = torch.tensor([r[idx] for idx,r in zip(valid_small_paraphrases['label'],para_probs_t)]) vm_orig_scores = torch.tensor([r[idx] for idx,r in zip(valid_small_paraphrases['label'],orig_probs_t)]) results_df = pd.DataFrame({'premise': valid_small_paraphrases['premise'], 'orig': valid_small_paraphrases['hypothesis'], 'para': valid_small_paraphrases['hypothesis_paraphrases'], 'sim_score': sim_score_t, 'label_true': valid_small_paraphrases['label'], 'label_vm_orig': orig_probs_t.argmax(1), 'label_vm_para': para_probs_t.argmax(1), 'vm_orig_truelabel': vm_orig_scores, 'vm_para_truelabel': vm_para_scores, 'vm_truelabel_change': vm_orig_scores - vm_para_scores, 'vm_orig_class0': orig_probs_t[:,0], 'vm_orig_class1': orig_probs_t[:,1], 'vm_orig_class2': orig_probs_t[:,2], 'vm_para_class0': para_probs_t[:,0], 'vm_para_class1': para_probs_t[:,1], 'vm_para_class2': para_probs_t[:,2] }) results_df['vm_truelabel_change_X_sim_score'] = results_df['vm_truelabel_change'] * results_df['sim_score'] results_df.to_csv(fname, index_label = 'idx')
_____no_output_____
Apache-2.0
archive/sentiment_paraphraser.ipynb
puzzler10/travis_attack
Permutation method to detect label flips Take each example $Ex$ in the filtered set and generate paraphrases (e.g. 16) of it (or it might work better with a simple token-replacement strategy). Run each through the victim model (might be better with a different model, but still trained on the dataset) and record predictions. Then tally up the label predictions (or maybe take the average of the probabilities). Each prediction is a vote for the true label. The idea is that if $Ex$ changes the ground truth label to class 4, then most of the paraphrases of $Ex$ will be of class 4 too. If $Ex$ is truly adversarial, then most of the paraphrases of $Ex$ are likely to be of the original class (or at least of other classes). So in other words: * if `is_adversarial = 1` then we expect most votes to be for classes other than `label_vm_para`. This means we expect more variance in the voting. If we take model confidence for the class of `label_vm_para` and work out entropy/variance, we expect it to be high. * if `is_adversarial = 0` then we expect most votes to be for the same class as `label_vm_para`. This means we expect less variance in the voting. If we take model confidence for the class of `label_vm_para` and work out entropy/variance, we expect it to be low. Variations * Instead of generating further paraphrases for all label flippers, try the checklist tests on the input. e.g. replace number/proper noun* Try systematic perturbations* Record probability of the true class or the predicted class and put it into a distribution. Calculate its entropy (STRIP style). The idea is that there is some reliable difference in these probabilities between ground-truth flips and otherwise, and that entropy can be used as a rough measurement to distinguish between them. * Can try the above while keeping track of sentence embeddings + attention layers
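As a minimal sketch of the voting and entropy signals described above (toy inputs only; the full implementation over the generated permutations follows in the cells below):

# Sketch: majority vote over permutation labels, plus entropy of the confidences for the paraphrase's class.
import numpy as np
from collections import Counter
from scipy.stats import entropy

def permutation_vote_flags(perm_labels, para_label):
    """Flag as adversarial when the majority vote over permutations disagrees with the paraphrase's label."""
    top_label, _ = Counter(perm_labels).most_common(1)[0]
    return top_label != para_label

def confidence_entropy(perm_probs, para_label, bins=10):
    """Entropy of the permutations' confidence in the paraphrase's predicted class (higher = more suspicious)."""
    hist, _ = np.histogram([p[para_label] for p in perm_probs], bins=bins)
    return entropy(hist / hist.sum())

print(permutation_vote_flags(perm_labels=[0, 0, 2, 0], para_label=2))   # True: the vote disagrees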
# Read in manually labelled data. This is to track results. fname = path_cache + 'results_df_48_20210514_labelled_subset.csv' dset_advlbl = load_dataset('csv', data_files=fname)['train'].train_test_split(test_size=0.25) train_advlbl,test_advlbl = dset_advlbl['train'],dset_advlbl['test'] # # as pandas df # df_advlbl = pd.read_csv(fname) # train_advlbl,_,test_advlbl = create_train_valid_test(df_advlbl, frac_train=0.75, frac_valid = 0.001) # # To join with the original. (might be some issues with the idx/row-number col) # # x = pd.merge(results_df, df_advlbl, on =['idx', 'premise','orig', 'para'])
Using custom data configuration default-ebc62bd8d2fb84e0 Reusing dataset csv (/data/tproth/.cache/huggingface/datasets/csv/default-ebc62bd8d2fb84e0/0.0.0/2dc6629a9ff6b5697d82c25b73731dd440507a69cbce8b425db50b751e8fcfd0)
Apache-2.0
archive/sentiment_paraphraser.ipynb
puzzler10/travis_attack
Paraphrases of paraphrases nlp dataset -> gen_paraphrases (returns dataset) -> create_paraphrase_dataset -> get vm labels -> save in data frame
n = 48 cols_to_drop = ['is_adversarial','label_true','label_vm_orig','orig','sim_score'] def paraphrase_and_return_dict(x, n_seed_seqs=16): x['perms'] = get_paraphrases(x['para'], num_return_sequences=n, num_beams=n, num_beam_groups=8, diversity_penalty=100000.0) return x train_advlbl_perms = train_advlbl.map(lambda x: paraphrase_and_return_dict(x, n_seed_seqs=n), batched=False, remove_columns = cols_to_drop) train_advlbl_expanded = train_advlbl_perms.map(lambda x: create_paraphrase_dataset(x, l_cname='perms'), batched=True) # Get victim model predictions for each prediction advlbl_expanded_dl = DataLoader(train_advlbl_expanded, batch_size=batch_size, shuffle=False, num_workers=n_wkrs, pin_memory=True) dl = advlbl_expanded_dl probs_l = [] assert vm_model.training == False # checks that model is in eval mode with torch.no_grad(): for i, data in enumerate(dl): if i % 50 == 0 : print(i, "out of", len(dl)) premise,perms = data["premise"],data["perms"] # predictions for original inputs = vm_tokenizer(premise,perms,padding=True,truncation=True, return_tensors="pt") inputs.to(device) outputs = vm_model(**inputs) probs = outputs.logits.softmax(1) # preds = probs.argmax(1) probs_l.append(probs.cpu()) probs_t = torch.cat(probs_l) preds_t = torch.argmax(probs_t,1) # Bring back to original train_advlbl_expanded = train_advlbl_expanded.add_column('vm_label', preds_t.tolist()) train_advlbl_expanded = train_advlbl_expanded.add_column('vm_prob0', probs_t[:,0].tolist()) train_advlbl_expanded = train_advlbl_expanded.add_column('vm_prob1', probs_t[:,1].tolist()) train_advlbl_expanded = train_advlbl_expanded.add_column('vm_prob2', probs_t[:,2].tolist()) # Make into pandas_df advlbl_df = pd.DataFrame(train_advlbl_expanded) advlbl_df.vm_label = advlbl_df.vm_label.astype('category') # Count "votes" of each set of permutations votes_df = advlbl_df.groupby(['idx'])['vm_label'].describe() votes_df = votes_df.rename(columns={'count':'votes','unique': "n_cats_with_votes", "top": 'top_cat', 'freq': 'top_cat_votes'}) # Get entropy and variance from each set of permutations, then choose only the values # that correspond to the predicted label of the paraphrase def get_entropy(x, bins=10): """Return shannon entropy of a vector. Used in pandas summary functions""" # the bins parameters affects the entropy quite a bit (it introduces zeros) hist,_ = np.histogram(x, bins=bins) hist = hist/sum(hist) # turn into PMF (not strictly required for scipy entropy, but easier to interpret) return scipy.stats.entropy(hist) grp = advlbl_df.groupby(['idx'])[['vm_prob0','vm_prob1','vm_prob2']] entropy_df = grp.agg(func = get_entropy) var_df = grp.agg(func = 'var') entropy_df.columns = [o + "_entropy" for o in entropy_df.columns] var_df.columns = [o + "_var" for o in var_df.columns] label_df = advlbl_df[['idx','label_vm_para']].drop_duplicates() def choose_col_of_df_from_label_column(df, labeldf, name='entropy'): """Picks columns of df corresponding to the predicted vm label of the paraphrase. 
Works only if probs of classes are the first columns of df in order.""" df = df.merge(labeldf,left_index=True, right_on='idx') v = df['label_vm_para'].values # See https://stackoverflow.com/a/61234228/5381490 df[name+'_label_vm_para'] = np.take_along_axis(df.values, v[:,None] ,axis=1) return df entropy_df = choose_col_of_df_from_label_column(entropy_df, label_df, name='entropy') var_df = choose_col_of_df_from_label_column(var_df, label_df, name='var') # Change original labelled set to a pandas data frame and merge it in train_advlbl_df,test_advlbl_df = pd.DataFrame(dset_advlbl['train']),pd.DataFrame(dset_advlbl['test']) train_advlbl_df = pd.merge(train_advlbl_df, votes_df, left_on ='idx', right_index=True) train_advlbl_df = pd.merge(train_advlbl_df, entropy_df[['idx','entropy_label_vm_para']], left_on ='idx', right_on='idx') train_advlbl_df = pd.merge(train_advlbl_df, var_df[['idx', 'var_label_vm_para']], left_on ='idx', right_on='idx') # Calculate label flip percentage and measure success train_advlbl_df['label_flip'] = train_advlbl_df['top_cat'] != train_advlbl_df['label_vm_para'] def permutation_success(x,y): result = None if x == 1 and y == True: result = True elif x == 0 and y == False: result = True elif x == -1 or x == -2: result = "To be determined" else: result = False return result v1,v2 = train_advlbl_df['is_adversarial'].values, train_advlbl_df['label_flip'].values train_advlbl_df['permutation_success'] = list(map(permutation_success, v1,v2)) pd.crosstab(index=train_advlbl_df['label_flip'], columns=train_advlbl_df['is_adversarial'], margins=True) train_advlbl_df.label_flip.value_counts() advlbl_df #### Exploring the method via reporting #### ## Set up parameters idx = train_advlbl_df.sample()[['idx']].values[0][0] #sample an index randomly from the table main_tbl = train_advlbl_df.query("idx==@idx") def getval(cname): return main_tbl.loc[:,cname].values[0] prem,hyp,para,sim_score = getval('premise'),getval('orig'),getval('para'),getval('sim_score') label_true,label_vm_orig,label_vm_para = getval('label_true'),getval('label_vm_orig'),getval('label_vm_para') advlbl = getval('is_adversarial') d_advlbl2str = { 1: "is a **successful** adversarial example", 0: "is **unsuccessful**: it flips the true label", -1: "contains a hypothesis paraphrase that **doesn't make sense** or is nonsensical.", -2: "is **excluded**: the original label might be wrong" } advstr = d_advlbl2str[advlbl] perm_samples = advlbl_df.query("idx==@idx").sample(5).to_markdown() ncats,top_cat,top_cat_votes = getval('n_cats_with_votes'),getval('top_cat'),getval('top_cat_votes') label_flip = top_cat != label_vm_para label_flip_to_orig_label = top_cat == label_vm_orig label_flip_to_diff_label = top_cat != label_vm_para and top_cat != label_vm_orig results_msg = "" if not label_flip: results_msg += "This does not flip the predicted label. \n" if label_flip_to_orig_label: results_msg += "This flips the label to the vm predicted label (" +\ str(label_vm_orig) + ") of the original hypothesis. 
\n" if label_flip_to_diff_label: results_msg += "This flips the predicted label but to a different class to the vm prediction of the original hypothesis.\n" results_msg += "\n" if advlbl == 1: results_msg += "If the theory is correct we expected a label flip for an adversarial example.\n " if label_flip: results_msg += "The label flip occured, so this was **successful**.\n" else: results_msg += "The label flip did not occur, so this was **unsuccessful**.\n" elif advlbl == 0: results_msg += "If the theory is correct we expect the label does not flip for an unadversarial example.\n " if label_flip: results_msg += "The label flip occured, so this was **unsuccessful**.\n" else: results_msg += "The label flip did not occur, so this was **successful**.\n" elif advlbl == -1: results_msg += "The original paraphrase didn't make sense, so we should figure out how to detect this.\n " else: results_msg += "The SNLI example was wrong or strange: disregard this example.\n" ## Insert into template Markdown(f""" Example with idx **{idx}** {main_tbl.to_markdown(index=True)} * **Premise**: `{prem}` * **Hypothesis (original)**: `{hyp}` (True label **{label_true}**, Victim Model (VM) label **{label_vm_orig}**) * **Hypothesis paraphrase**: `{para}` (VM label **{label_vm_para}**) This example {advstr}. We generate {n} further *permutations* of the hypothesis paraphrase and get VM votes and confidence for each of them. The label of the hypothesis paraphrase was **{label_vm_para}**. Here are five of these permutations (randomly chosen): {perm_samples} **Voting strategy results** We get {ncats} categories with votes. The most voted for category is **label {top_cat}** with {top_cat_votes} votes. The paraphrase initially had label **{label_vm_para}**. {results_msg} Now we look at the variance and entropy of the predicted probabilities of each class. We are interested in class **{label_vm_para}** as it is the label of the hypothesis paraphrase. 
*Entropy* {entropy_df.query("idx==@idx").round(2).to_markdown(index=True)} *Variance* {var_df.query("idx==@idx").round(2).to_markdown(index=True)} """) # # calculates performance of victim model on a dataloader # dl = valid_dl # metric = load_metric('accuracy') # for i, data in enumerate(dl): # if i % 10 == 0 : print(i, "out of", len(dl)) # labels,premise,hypothesis = data['label'].to(device),data["premise"],data["hypothesis"] # inputs = vm_tokenizer(premise,hypothesis, padding=True,truncation=True, return_tensors="pt") # inputs.to(device) # outputs = vm_model(**inputs, labels=labels) # probs = outputs.logits.softmax(1) # preds = probs.argmax(1) # metric.add_batch(predictions=preds, references=labels) # metric.compute() # # Score semantic similarity with cross encoders # from sentence_transformers.cross_encoder import CrossEncoder # cross_encoder= CrossEncoder('cross-encoder/quora-distilroberta-base') # i =11 # data = valid_small[i] # orig, para = data['hypothesis'], data['hypothesis_paraphrases'] # orig_rep = [orig for i in range(len(para))] # pairs = list(zip(orig_rep,para)) # scores = cross_encoder.predict(pairs) # results_df = pd.DataFrame({'pairs':pairs, 'para': para,'score': cos_sim}) # print(orig) # results_df.sort_values('score', ascending=False) # # with sentence transformers # valid_small_dl = DataLoader(valid_small, batch_size=4, shuffle=False, # num_workers=n_wkrs, pin_memory=True) # sim_score_l = [] # for i, data in enumerate(valid_small_dl): # pass # orig, para = data['hypothesis'], data['hypothesis_paraphrases'] # orig_emb,para_emb = embedding_model.encode(orig),embedding_model.encode(para) # # cos_sim = util.cos_sim(orig_emb,para_emb)[0] # # results_df = pd.DataFrame({'para': para,'score': cos_sim}) # # print(orig) # # results_df.sort_values('score', ascending=False)
_____no_output_____
Apache-2.0
archive/sentiment_paraphraser.ipynb
puzzler10/travis_attack
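The cell above reads per-class entropy and variance out of `entropy_df` and `var_df` without showing how they might be built. Here is a minimal sketch of one plausible computation, assuming a hypothetical `probs` array of shape (n_permutations, n_classes) holding the victim model's softmax outputs for the permutations; the array name, its shape, and the per-class entropy definition are assumptions, not the notebook's actual code.

import numpy as np
import pandas as pd

# Hypothetical example: softmax outputs for 4 permutations over 3 classes (assumed data).
probs = np.array([[0.7, 0.2, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.8, 0.1, 0.1],
                  [0.5, 0.4, 0.1]])

# Per-class variance of the predicted probability across permutations.
per_class_var = probs.var(axis=0)

# Per-class entropy: treat each class's probabilities across permutations as an
# unnormalised distribution, normalise it, then apply -sum(p * log p).
col = probs / probs.sum(axis=0, keepdims=True)
per_class_entropy = -(col * np.log(col)).sum(axis=0)

summary = pd.DataFrame({'variance': per_class_var, 'entropy': per_class_entropy})
print(summary.round(3))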
Basic Linear Algebra - Writing it as **Pythonic Code** > https://github.com/TEAMLAB-Lecture/AI-python-connect/tree/master/lab_assignments/lab_1 Problem 1 - vector_size_check
# Answer code
def vector_size_check(*vector_variables):  # use an asterisk so the number of inputs can vary => the inputs are packed into a tuple
    return all(len(vector_variables[0]) == x  # all([True,True]) : True, all([True,False]) : False
               for x in [len(vector) for vector in vector_variables[1:]])

# Execution result
print(vector_size_check([1,2,3], [2,3,4], [5,6,7]))
print(vector_size_check([1,3], [2,4], [6,7]))
print(vector_size_check([1,3,4], [4], [6,7]))
True True False
MIT
Python/ML_basic/1.Pythonic Code/1-3.Basic Linear Algebra.ipynb
statKim/TIL
Problem 2 - vector_addition
# Answer code
def vector_addition(*vector_variables):
    if vector_size_check(*vector_variables) == False:
        raise ArithmeticError
    return [sum(elements) for elements in zip(*vector_variables)]

# Execution result
print(vector_addition([1,3], [2,4], [6,7]))
print(vector_addition([1,5], [10,4], [4,7]))
print(vector_addition([1,3,4], [4], [6,7]))
[9, 14] [15, 16]
MIT
Python/ML_basic/1.Pythonic Code/1-3.Basic Linear Algebra.ipynb
statKim/TIL
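Since every solution in this section leans on the same `zip(*...)` unpacking idiom, here is a small illustrative sketch (an addition, not part of the original assignment) showing what it produces:

# Quick illustration of the zip-unpacking idiom used throughout these solutions.
vectors = ([1, 3], [2, 4], [6, 7])
print(list(zip(*vectors)))                             # [(1, 2, 6), (3, 4, 7)] -- elements grouped by position
print([sum(elements) for elements in zip(*vectors)])   # [9, 14], same result as vector_addition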
Problem 3 - vector_subtraction
# My code
def vector_subtraction(*vector_variables):
    if vector_size_check(*vector_variables) == False:
        raise ArithmeticError
    return [elements[0] - sum(elements[1:]) for elements in zip(*vector_variables)]

# Answer code
def vector_subtraction(*vector_variables):
    if vector_size_check(*vector_variables) == False:
        raise ArithmeticError
    return [elements[0]*2 - sum(elements) for elements in zip(*vector_variables)]

# Execution result
print(vector_subtraction([1,3], [2,4]))
print(vector_subtraction([1,5], [10,4], [4,7]))
[-1, -1] [-13, -6]
MIT
Python/ML_basic/1.Pythonic Code/1-3.Basic Linear Algebra.ipynb
statKim/TIL
Problem 4 - scalar_vector_product
# My code
def scalar_vector_product(alpha, vector_variable):
    return [alpha*vec for vec in vector_variable]

# Execution result
print(scalar_vector_product(5, [1,2,3]))
print(scalar_vector_product(3, [2,2]))
print(scalar_vector_product(4, [1]))
[5, 10, 15] [6, 6] [4]
MIT
Python/ML_basic/1.Pythonic Code/1-3.Basic Linear Algebra.ipynb
statKim/TIL
Problem 5 - matrix_size_check
# My code
def matrix_size_check(*matrix_variables):
    return (all(len(matrix_variables[0]) == xdim for xdim in [len(x) for x in matrix_variables])
            and all(len(matrix_variables[0][0]) == ydim
                    for ydim in set([len(y) for matrix in matrix_variables for y in matrix])))

# Answer code
# If the set built from each matrix's x and y dimensions has length 1, the sizes all match!!
def matrix_size_check(*matrix_variables):
    return (all([len(set(len(matrix[0]) for matrix in matrix_variables)) == 1])
            and all([len(matrix_variables[0]) == len(matrix) for matrix in matrix_variables]))

# Execution result
matrix_x = [[2, 2], [2, 2], [2, 2]]
matrix_y = [[2, 5], [2, 1]]
matrix_z = [[2, 4], [5, 3]]
matrix_w = [[2, 5], [1, 1], [2, 2]]
print(matrix_size_check(matrix_x, matrix_y, matrix_z))
print(matrix_size_check(matrix_y, matrix_z))
print(matrix_size_check(matrix_x, matrix_w))
False True True
MIT
Python/ML_basic/1.Pythonic Code/1-3.Basic Linear Algebra.ipynb
statKim/TIL
Problem 6 - is_matrix_equal
# My code
# Put the values at each position into one list, then take a set to drop duplicates => length = 1
# Repeat this for every position of the matrices, collect all those lengths, take a set of them,
# and if that set has length 1 the matrices are equal
def is_matrix_equal(*matrix_variables):
    return len(set([len(set(elements)) for row in zip(*matrix_variables) for elements in zip(*row)])) == 1

# Answer code
def is_matrix_equal(*matrix_variables):
    # print([matrix for matrix in zip(*matrix_variables)])
    return all(
        [all([len(set(row)) == 1 for row in zip(*matrix)])  # gather the values at each position, take a set, and check whether its len is 1
         for matrix in zip(*matrix_variables)]
    )

# Execution result
matrix_x = [[2, 2], [2, 2]]
matrix_y = [[2, 5], [2, 1]]
print(is_matrix_equal(matrix_x, matrix_y, matrix_y, matrix_y))
print(is_matrix_equal(matrix_x, matrix_x))
False True
MIT
Python/ML_basic/1.Pythonic Code/1-3.Basic Linear Algebra.ipynb
statKim/TIL
Problem 7 - matrix_addition
# My code
def matrix_addition(*matrix_variables):
    if matrix_size_check(*matrix_variables) == False:
        raise ArithmeticError
    return [ [sum(element) for element in zip(*row)] for row in zip(*matrix_variables)]

# Execution result
matrix_x = [[2, 2], [2, 2]]
matrix_y = [[2, 5], [2, 1]]
matrix_z = [[2, 4], [5, 3]]
print(matrix_addition(matrix_x, matrix_y))            # Expected value: [[4, 7], [4, 3]]
print(matrix_addition(matrix_x, matrix_y, matrix_z))  # Expected value: [[6, 11], [9, 6]]
[[4, 7], [4, 3]] [[6, 11], [9, 6]]
MIT
Python/ML_basic/1.Pythonic Code/1-3.Basic Linear Algebra.ipynb
statKim/TIL
Problem 8 - matrix_subtraction
# My code
def matrix_subtraction(*matrix_variables):
    if matrix_size_check(*matrix_variables) == False:
        raise ArithmeticError
    return [ [element[0] - sum(element[1:]) for element in zip(*row)] for row in zip(*matrix_variables)]

# Answer code
def matrix_subtraction(*matrix_variables):
    if matrix_size_check(*matrix_variables) == False:
        raise ArithmeticError
    return [ [2*element[0] - sum(element) for element in zip(*row)] for row in zip(*matrix_variables)]

# Execution result
matrix_x = [[2, 2], [2, 2]]
matrix_y = [[2, 5], [2, 1]]
matrix_z = [[2, 4], [5, 3]]
print(matrix_subtraction(matrix_x, matrix_y))            # Expected value: [[0, -3], [0, 1]]
print(matrix_subtraction(matrix_x, matrix_y, matrix_z))  # Expected value: [[-2, -7], [-5, -2]]
[[0, -3], [0, 1]] [[-2, -7], [-5, -2]]
MIT
Python/ML_basic/1.Pythonic Code/1-3.Basic Linear Algebra.ipynb
statKim/TIL
Problem 9 - matrix_transpose
# My code
def matrix_transpose(matrix_variable):
    return [[*new_row] for new_row in zip(*matrix_variable)]

# Answer code
def matrix_transpose(matrix_variable):
    return [ [element for element in row] for row in zip(*matrix_variable)]

# Execution result
matrix_w = [[2, 5], [1, 1], [2, 2]]
matrix_transpose(matrix_w)
_____no_output_____
MIT
Python/ML_basic/1.Pythonic Code/1-3.Basic Linear Algebra.ipynb
statKim/TIL
Problem 10 - scalar_matrix_product
# My code
def scalar_matrix_product(alpha, matrix_variable):
    return [ [alpha*element for element in row] for row in matrix_variable]

# Answer code
def scalar_matrix_product(alpha, matrix_variable):
    return [ scalar_vector_product(alpha, row) for row in matrix_variable]

# Execution result
matrix_x = [[2, 2], [2, 2], [2, 2]]
matrix_y = [[2, 5], [2, 1]]
matrix_z = [[2, 4], [5, 3]]
matrix_w = [[2, 5], [1, 1], [2, 2]]
print(scalar_matrix_product(3, matrix_x))  # Expected value: [[6, 6], [6, 6], [6, 6]]
print(scalar_matrix_product(2, matrix_y))  # Expected value: [[4, 10], [4, 2]]
print(scalar_matrix_product(4, matrix_z))  # Expected value: [[8, 16], [20, 12]]
print(scalar_matrix_product(3, matrix_w))  # Expected value: [[6, 15], [3, 3], [6, 6]]
[[6, 6], [6, 6], [6, 6]] [[4, 10], [4, 2]] [[8, 16], [20, 12]] [[6, 15], [3, 3], [6, 6]]
MIT
Python/ML_basic/1.Pythonic Code/1-3.Basic Linear Algebra.ipynb
statKim/TIL
Problem 11 - is_product_availability_matrix
# My code
def is_product_availability_matrix(matrix_a, matrix_b):
    return len(matrix_a[0]) == len(matrix_b)

# Answer code
def is_product_availability_matrix(matrix_a, matrix_b):
    return len([column_vector for column_vector in zip(*matrix_a)]) == len(matrix_b)

# Execution result
matrix_x= [[2, 5], [1, 1]]
matrix_y = [[1, 1, 2], [2, 1, 1]]
matrix_z = [[2, 4], [5, 3], [1, 3]]
print(is_product_availability_matrix(matrix_y, matrix_z))  # Expected value: True
print(is_product_availability_matrix(matrix_z, matrix_x))  # Expected value: True
print(is_product_availability_matrix(matrix_z, matrix_w))  # Expected value: False  // matrix_w is not redefined here; it carries over from the earlier cell
print(is_product_availability_matrix(matrix_x, matrix_x))  # Expected value: True
True True False True
MIT
Python/ML_basic/1.Pythonic Code/1-3.Basic Linear Algebra.ipynb
statKim/TIL
Problem 12 - matrix_product
# My code
def matrix_product(matrix_a, matrix_b):
    if is_product_availability_matrix(matrix_a, matrix_b) == False:
        raise ArithmeticError
    return [ [ sum( [element[0] * element[1] for element in zip(column, row)] ) for column in zip(*matrix_b) ] for row in matrix_a]

# Answer code
def matrix_product(matrix_a, matrix_b):
    if is_product_availability_matrix(matrix_a, matrix_b) == False:
        raise ArithmeticError
    return [ [sum(a*b for a,b in zip(row_a, column_b)) for column_b in zip(*matrix_b) ] for row_a in matrix_a]

# Execution result
matrix_x= [[2, 5], [1, 1]]
matrix_y = [[1, 1, 2], [2, 1, 1]]
matrix_z = [[2, 4], [5, 3], [1, 3]]
print(matrix_product(matrix_y, matrix_z))  # Expected value: [[9, 13], [10, 14]]
print(matrix_product(matrix_z, matrix_x))  # Expected value: [[8, 14], [13, 28], [5, 8]]
print(matrix_product(matrix_x, matrix_x))  # Expected value: [[9, 15], [3, 6]]
print(matrix_product(matrix_z, matrix_w))  # Expected value: False (actually raises ArithmeticError, since the shapes are incompatible)
[[9, 13], [10, 14]] [[8, 14], [13, 28], [5, 8]] [[9, 15], [3, 6]]
MIT
Python/ML_basic/1.Pythonic Code/1-3.Basic Linear Algebra.ipynb
statKim/TIL
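As a sanity check (an addition, not part of the original assignment), the pure-Python results can be compared against NumPy, assuming NumPy is available and the functions from the earlier cells are defined:

import numpy as np

matrix_y = [[1, 1, 2], [2, 1, 1]]
matrix_z = [[2, 4], [5, 3], [1, 3]]

# Cross-check the hand-written product and transpose against NumPy's implementations.
assert matrix_product(matrix_y, matrix_z) == (np.array(matrix_y) @ np.array(matrix_z)).tolist()
assert matrix_transpose(matrix_z) == np.array(matrix_z).T.tolist()
print("Pure-Python results match NumPy")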
Copyright 2018 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
_____no_output_____
Apache-2.0
site/en/tutorials/images/transfer_learning_with_hub.ipynb
miried/tensorflow-docs
Transfer learning with TensorFlow Hub [TensorFlow Hub](https://tfhub.dev/) is a repository of pre-trained TensorFlow models. This tutorial demonstrates how to: 1. Use models from TensorFlow Hub with `tf.keras` 1. Use an image classification model from TensorFlow Hub 1. Do simple transfer learning to fine-tune a model for your own image classes Setup
import numpy as np
import time

import PIL.Image as Image
import matplotlib.pylab as plt

import tensorflow as tf
import tensorflow_hub as hub
_____no_output_____
Apache-2.0
site/en/tutorials/images/transfer_learning_with_hub.ipynb
miried/tensorflow-docs
An ImageNet classifier You'll start by using a pretrained classifier model to take an image and predict what it's an image of - no training required! Download the classifier Use `hub.KerasLayer` to load a [MobileNetV2 model](https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/2) from TensorFlow Hub. Any [compatible image classifier model](https://tfhub.dev/s?q=tf2&module-type=image-classification) from tfhub.dev will work here.
classifier_model ="https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4" #@param {type:"string"}

IMAGE_SHAPE = (224, 224)

classifier = tf.keras.Sequential([
    hub.KerasLayer(classifier_model, input_shape=IMAGE_SHAPE+(3,))
])
_____no_output_____
Apache-2.0
site/en/tutorials/images/transfer_learning_with_hub.ipynb
miried/tensorflow-docs
Run it on a single image Download a single image to try the model on.
grace_hopper = tf.keras.utils.get_file('image.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg')
grace_hopper = Image.open(grace_hopper).resize(IMAGE_SHAPE)
grace_hopper

grace_hopper = np.array(grace_hopper)/255.0
grace_hopper.shape
_____no_output_____
Apache-2.0
site/en/tutorials/images/transfer_learning_with_hub.ipynb
miried/tensorflow-docs
Add a batch dimension, and pass the image to the model.
result = classifier.predict(grace_hopper[np.newaxis, ...])
result.shape
_____no_output_____
Apache-2.0
site/en/tutorials/images/transfer_learning_with_hub.ipynb
miried/tensorflow-docs
The result is a 1001-element vector of logits, rating the probability of each class for the image. So the top class ID can be found with argmax:
predicted_class = np.argmax(result[0], axis=-1)
predicted_class
_____no_output_____
Apache-2.0
site/en/tutorials/images/transfer_learning_with_hub.ipynb
miried/tensorflow-docs
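Since the model outputs logits rather than probabilities, here is a short optional sketch (reusing the `result` array from the cell above; the top-5 inspection is an addition, not part of the original tutorial) that converts the logits to class probabilities and lists the five most likely class IDs:

# Convert logits to probabilities that sum to 1, then look at the five largest.
probabilities = tf.nn.softmax(result[0]).numpy()
top_5 = np.argsort(probabilities)[-5:][::-1]   # indices of the five highest-probability classes
for class_id in top_5:
    print(class_id, probabilities[class_id])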
Decode the predictions Take the predicted class ID and fetch the `ImageNet` labels to decode the predictions
labels_path = tf.keras.utils.get_file('ImageNetLabels.txt','https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
imagenet_labels = np.array(open(labels_path).read().splitlines())

plt.imshow(grace_hopper)
plt.axis('off')
predicted_class_name = imagenet_labels[predicted_class]
_ = plt.title("Prediction: " + predicted_class_name.title())
_____no_output_____
Apache-2.0
site/en/tutorials/images/transfer_learning_with_hub.ipynb
miried/tensorflow-docs
Simple transfer learning But what if you want to train a classifier for a dataset with different classes? You can also use a model from TFHub to train a custom image classifier by retraining the top layer of the model to recognize the classes in your dataset. Dataset For this example you will use the TensorFlow flowers dataset:
data_root = tf.keras.utils.get_file( 'flower_photos','https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True)
_____no_output_____
Apache-2.0
site/en/tutorials/images/transfer_learning_with_hub.ipynb
miried/tensorflow-docs
The simplest way to load this data into our model is with `tf.keras.preprocessing.image.ImageDataGenerator`. TensorFlow Hub's convention for image models is to expect float inputs in the `[0, 1]` range. Use the `ImageDataGenerator`'s `rescale` parameter to achieve this.
image_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1/255)
image_data = image_generator.flow_from_directory(str(data_root), target_size=IMAGE_SHAPE)
_____no_output_____
Apache-2.0
site/en/tutorials/images/transfer_learning_with_hub.ipynb
miried/tensorflow-docs
The resulting object is an iterator that returns `image_batch, label_batch` pairs.
for image_batch, label_batch in image_data:
    print("Image batch shape: ", image_batch.shape)
    print("Label batch shape: ", label_batch.shape)
    break
_____no_output_____
Apache-2.0
site/en/tutorials/images/transfer_learning_with_hub.ipynb
miried/tensorflow-docs
Run the classifier on a batch of images Now run the classifier on the image batch.
result_batch = classifier.predict(image_batch)
result_batch.shape

predicted_class_names = imagenet_labels[np.argmax(result_batch, axis=-1)]
predicted_class_names
_____no_output_____
Apache-2.0
site/en/tutorials/images/transfer_learning_with_hub.ipynb
miried/tensorflow-docs
Now check how these predictions line up with the images:
plt.figure(figsize=(10,9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
    plt.subplot(6,5,n+1)
    plt.imshow(image_batch[n])
    plt.title(predicted_class_names[n])
    plt.axis('off')
_ = plt.suptitle("ImageNet predictions")
_____no_output_____
Apache-2.0
site/en/tutorials/images/transfer_learning_with_hub.ipynb
miried/tensorflow-docs
See the `LICENSE.txt` file for image attributions. The results are far from perfect, but reasonable considering that these are not the classes the model was trained for (except "daisy"). Download the headless model TensorFlow Hub also distributes models without the top classification layer. These can be used to easily do transfer learning. Any [compatible image feature vector model](https://tfhub.dev/s?module-type=image-feature-vector&q=tf2) from tfhub.dev will work here.
feature_extractor_model = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4" #@param {type:"string"}
_____no_output_____
Apache-2.0
site/en/tutorials/images/transfer_learning_with_hub.ipynb
miried/tensorflow-docs
Create the feature extractor. Use `trainable=False` to freeze the variables in the feature extractor layer, so that the training only modifies the new classifier layer.
feature_extractor_layer = hub.KerasLayer( feature_extractor_model, input_shape=(224, 224, 3), trainable=False)
_____no_output_____
Apache-2.0
site/en/tutorials/images/transfer_learning_with_hub.ipynb
miried/tensorflow-docs
It returns a 1280-length vector for each image:
feature_batch = feature_extractor_layer(image_batch)
print(feature_batch.shape)
_____no_output_____
Apache-2.0
site/en/tutorials/images/transfer_learning_with_hub.ipynb
miried/tensorflow-docs
Attach a classification head Now wrap the hub layer in a `tf.keras.Sequential` model, and add a new classification layer.
model = tf.keras.Sequential([
    feature_extractor_layer,
    tf.keras.layers.Dense(image_data.num_classes)
])

model.summary()

predictions = model(image_batch)
predictions.shape
_____no_output_____
Apache-2.0
site/en/tutorials/images/transfer_learning_with_hub.ipynb
miried/tensorflow-docs
Train the model Use compile to configure the training process:
model.compile( optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True), metrics=['acc'])
_____no_output_____
Apache-2.0
site/en/tutorials/images/transfer_learning_with_hub.ipynb
miried/tensorflow-docs
Now use the `.fit` method to train the model. To keep this example short, train for just 2 epochs. To visualize the training progress, use a custom callback to log the loss and accuracy of each batch individually, instead of the epoch average.
class CollectBatchStats(tf.keras.callbacks.Callback):
    def __init__(self):
        self.batch_losses = []
        self.batch_acc = []

    def on_train_batch_end(self, batch, logs=None):
        self.batch_losses.append(logs['loss'])
        self.batch_acc.append(logs['acc'])
        self.model.reset_metrics()

steps_per_epoch = np.ceil(image_data.samples/image_data.batch_size)

batch_stats_callback = CollectBatchStats()

history = model.fit(image_data, epochs=2,
                    steps_per_epoch=steps_per_epoch,
                    callbacks=[batch_stats_callback])
_____no_output_____
Apache-2.0
site/en/tutorials/images/transfer_learning_with_hub.ipynb
miried/tensorflow-docs
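To see what the retrained head has learned, here is a short optional check (reusing `model`, `image_batch`, and `image_data` from the cells above; printing only the first ten labels is an arbitrary choice) that maps the new predictions back to the flower class names:

# Recover the class names in index order from the generator's class_indices mapping.
class_names = sorted(image_data.class_indices.items(), key=lambda pair: pair[1])
class_names = np.array([key.title() for key, value in class_names])

# Predict on the cached batch and translate class IDs into flower names.
predicted_batch = model.predict(image_batch)
predicted_id = np.argmax(predicted_batch, axis=-1)
predicted_label_batch = class_names[predicted_id]
print(predicted_label_batch[:10])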